00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1059
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3726
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.071 The recommended git tool is: git
00:00:00.071 using credential 00000000-0000-0000-0000-000000000002
00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.103 Fetching changes from the remote Git repository
00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.147 Using shallow fetch with depth 1
00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.147 > git --version # timeout=10
00:00:00.182 > git --version # 'git version 2.39.2'
00:00:00.182 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.210 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.210 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.170 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.182 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.196 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.196 > git config core.sparsecheckout # timeout=10
00:00:05.207 > git read-tree -mu HEAD # timeout=10
00:00:05.223 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.250 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.250 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.349 [Pipeline] Start of Pipeline
00:00:05.365 [Pipeline] library
00:00:05.367 Loading library shm_lib@master
00:00:05.367 Library shm_lib@master is cached. Copying from home.
00:00:05.381 [Pipeline] node
00:00:05.391 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.392 [Pipeline] {
00:00:05.399 [Pipeline] catchError
00:00:05.400 [Pipeline] {
00:00:05.408 [Pipeline] wrap
00:00:05.413 [Pipeline] {
00:00:05.421 [Pipeline] stage
00:00:05.423 [Pipeline] { (Prologue)
00:00:05.630 [Pipeline] sh
00:00:05.911 + logger -p user.info -t JENKINS-CI
00:00:05.929 [Pipeline] echo
00:00:05.930 Node: WFP21
00:00:05.938 [Pipeline] sh
00:00:06.238 [Pipeline] setCustomBuildProperty
00:00:06.246 [Pipeline] echo
00:00:06.247 Cleanup processes
00:00:06.250 [Pipeline] sh
00:00:06.533 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.533 1068221 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.544 [Pipeline] sh
00:00:06.824 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.824 ++ grep -v 'sudo pgrep'
00:00:06.824 ++ awk '{print $1}'
00:00:06.824 + sudo kill -9
00:00:06.824 + true
00:00:06.835 [Pipeline] cleanWs
00:00:06.844 [WS-CLEANUP] Deleting project workspace...
00:00:06.844 [WS-CLEANUP] Deferred wipeout is used...
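A note on the "Cleanup processes" step above: the pipeline lists any stale SPDK processes left over from a previous run and force-kills them, tolerating the common case where nothing matches (hence the bare `+ sudo kill -9` followed by `+ true` in the trace). A minimal sketch of the same idiom, with a hypothetical WORKSPACE variable standing in for the job directory:

    # Kill leftover processes whose command line mentions the workspace.
    WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # hypothetical stand-in
    # pgrep -af prints "PID command-line"; drop the pgrep itself, keep the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
    # $pids is intentionally unquoted so multiple PIDs split into arguments;
    # an empty list makes kill fail with a usage error, which || true absorbs.
    sudo kill -9 $pids || true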
00:00:06.851 [WS-CLEANUP] done 00:00:06.854 [Pipeline] setCustomBuildProperty 00:00:06.867 [Pipeline] sh 00:00:07.150 + sudo git config --global --replace-all safe.directory '*' 00:00:07.239 [Pipeline] httpRequest 00:00:07.879 [Pipeline] echo 00:00:07.881 Sorcerer 10.211.164.20 is alive 00:00:07.891 [Pipeline] retry 00:00:07.893 [Pipeline] { 00:00:07.907 [Pipeline] httpRequest 00:00:07.911 HttpMethod: GET 00:00:07.912 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.912 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.926 Response Code: HTTP/1.1 200 OK 00:00:07.926 Success: Status code 200 is in the accepted range: 200,404 00:00:07.927 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.556 [Pipeline] } 00:00:12.573 [Pipeline] // retry 00:00:12.582 [Pipeline] sh 00:00:12.867 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.883 [Pipeline] httpRequest 00:00:13.307 [Pipeline] echo 00:00:13.308 Sorcerer 10.211.164.20 is alive 00:00:13.318 [Pipeline] retry 00:00:13.320 [Pipeline] { 00:00:13.334 [Pipeline] httpRequest 00:00:13.338 HttpMethod: GET 00:00:13.338 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:13.340 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:13.356 Response Code: HTTP/1.1 200 OK 00:00:13.356 Success: Status code 200 is in the accepted range: 200,404 00:00:13.356 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:56.401 [Pipeline] } 00:00:56.419 [Pipeline] // retry 00:00:56.427 [Pipeline] sh 00:00:56.719 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:59.267 [Pipeline] sh 00:00:59.552 + git -C spdk log --oneline -n5 00:00:59.552 c13c99a5e test: Various fixes for Fedora40 00:00:59.552 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:59.552 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:59.552 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:59.552 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:59.571 [Pipeline] withCredentials 00:00:59.581 > git --version # timeout=10 00:00:59.593 > git --version # 'git version 2.39.2' 00:00:59.611 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:59.613 [Pipeline] { 00:00:59.622 [Pipeline] retry 00:00:59.624 [Pipeline] { 00:00:59.639 [Pipeline] sh 00:00:59.924 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:00.196 [Pipeline] } 00:01:00.214 [Pipeline] // retry 00:01:00.219 [Pipeline] } 00:01:00.235 [Pipeline] // withCredentials 00:01:00.245 [Pipeline] httpRequest 00:01:00.653 [Pipeline] echo 00:01:00.655 Sorcerer 10.211.164.20 is alive 00:01:00.664 [Pipeline] retry 00:01:00.666 [Pipeline] { 00:01:00.680 [Pipeline] httpRequest 00:01:00.685 HttpMethod: GET 00:01:00.685 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:00.686 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:00.697 Response Code: HTTP/1.1 200 OK 00:01:00.698 Success: Status code 200 is in the accepted range: 200,404 00:01:00.698 Saving response body to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:16.455 [Pipeline] }
00:01:16.473 [Pipeline] // retry
00:01:16.481 [Pipeline] sh
00:01:16.768 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:18.159 [Pipeline] sh
00:01:18.446 + git -C dpdk log --oneline -n5
00:01:18.446 caf0f5d395 version: 22.11.4
00:01:18.446 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:18.446 dc9c799c7d vhost: fix missing spinlock unlock
00:01:18.446 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:18.446 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:18.456 [Pipeline] }
00:01:18.470 [Pipeline] // stage
00:01:18.479 [Pipeline] stage
00:01:18.481 [Pipeline] { (Prepare)
00:01:18.499 [Pipeline] writeFile
00:01:18.514 [Pipeline] sh
00:01:18.799 + logger -p user.info -t JENKINS-CI
00:01:18.812 [Pipeline] sh
00:01:19.096 + logger -p user.info -t JENKINS-CI
00:01:19.108 [Pipeline] sh
00:01:19.393 + cat autorun-spdk.conf
00:01:19.393 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.393 SPDK_TEST_NVMF=1
00:01:19.393 SPDK_TEST_NVME_CLI=1
00:01:19.393 SPDK_TEST_NVMF_NICS=mlx5
00:01:19.393 SPDK_RUN_UBSAN=1
00:01:19.393 NET_TYPE=phy
00:01:19.393 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:19.393 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:19.400 RUN_NIGHTLY=1
00:01:19.405 [Pipeline] readFile
00:01:19.428 [Pipeline] withEnv
00:01:19.430 [Pipeline] {
00:01:19.442 [Pipeline] sh
00:01:19.784 + set -ex
00:01:19.784 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:19.784 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:19.784 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.784 ++ SPDK_TEST_NVMF=1
00:01:19.784 ++ SPDK_TEST_NVME_CLI=1
00:01:19.784 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:19.784 ++ SPDK_RUN_UBSAN=1
00:01:19.784 ++ NET_TYPE=phy
00:01:19.784 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:19.784 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:19.784 ++ RUN_NIGHTLY=1
00:01:19.784 + case $SPDK_TEST_NVMF_NICS in
00:01:19.784 + DRIVERS=mlx5_ib
00:01:19.784 + [[ -n mlx5_ib ]]
00:01:19.784 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:19.784 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:26.358 rmmod: ERROR: Module irdma is not currently loaded
00:01:26.358 rmmod: ERROR: Module i40iw is not currently loaded
00:01:26.358 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:26.358 + true
00:01:26.358 + for D in $DRIVERS
00:01:26.358 + sudo modprobe mlx5_ib
00:01:26.358 + exit 0
00:01:26.368 [Pipeline] }
00:01:26.383 [Pipeline] // withEnv
00:01:26.388 [Pipeline] }
00:01:26.403 [Pipeline] // stage
00:01:26.412 [Pipeline] catchError
00:01:26.414 [Pipeline] {
00:01:26.428 [Pipeline] timeout
00:01:26.428 Timeout set to expire in 1 hr 0 min
00:01:26.430 [Pipeline] {
00:01:26.444 [Pipeline] stage
00:01:26.446 [Pipeline] { (Tests)
00:01:26.460 [Pipeline] sh
00:01:26.748 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:26.748 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:26.748 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:26.748 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:26.748 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:26.748 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:26.748 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:26.748 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:26.748 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:26.748 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:26.748 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:26.748 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:26.748 + source /etc/os-release
00:01:26.748 ++ NAME='Fedora Linux'
00:01:26.748 ++ VERSION='39 (Cloud Edition)'
00:01:26.748 ++ ID=fedora
00:01:26.748 ++ VERSION_ID=39
00:01:26.748 ++ VERSION_CODENAME=
00:01:26.748 ++ PLATFORM_ID=platform:f39
00:01:26.748 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:26.748 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.748 ++ LOGO=fedora-logo-icon
00:01:26.748 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:26.748 ++ HOME_URL=https://fedoraproject.org/
00:01:26.748 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:26.748 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.748 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.748 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.748 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:26.748 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.748 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:26.748 ++ SUPPORT_END=2024-11-12
00:01:26.748 ++ VARIANT='Cloud Edition'
00:01:26.748 ++ VARIANT_ID=cloud
00:01:26.748 + uname -a
00:01:26.748 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:26.748 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:30.042 Hugepages
00:01:30.042 node hugesize free / total
00:01:30.042 node0 1048576kB 0 / 0
00:01:30.042 node0 2048kB 0 / 0
00:01:30.042 node1 1048576kB 0 / 0
00:01:30.042 node1 2048kB 0 / 0
00:01:30.042
00:01:30.042 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:30.042 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:30.042 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:30.042 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:30.042 + rm -f /tmp/spdk-ld-path
00:01:30.042 + source autorun-spdk.conf
00:01:30.042 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.042 ++ SPDK_TEST_NVMF=1
00:01:30.042 ++ SPDK_TEST_NVME_CLI=1
00:01:30.042 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:30.042 ++ SPDK_RUN_UBSAN=1
00:01:30.042 ++ NET_TYPE=phy
00:01:30.042 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:30.042 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:30.042 ++ RUN_NIGHTLY=1
00:01:30.042 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:30.042 + [[ -n '' ]]
00:01:30.042 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:30.042 + for M in /var/spdk/build-*-manifest.txt
00:01:30.042 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:30.042 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:30.042 + for M in /var/spdk/build-*-manifest.txt 00:01:30.042 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:30.042 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:30.042 + for M in /var/spdk/build-*-manifest.txt 00:01:30.042 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:30.042 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:30.042 ++ uname 00:01:30.042 + [[ Linux == \L\i\n\u\x ]] 00:01:30.042 + sudo dmesg -T 00:01:30.042 + sudo dmesg --clear 00:01:30.042 + dmesg_pid=1069186 00:01:30.042 + [[ Fedora Linux == FreeBSD ]] 00:01:30.042 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.042 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.042 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:30.042 + [[ -x /usr/src/fio-static/fio ]] 00:01:30.042 + export FIO_BIN=/usr/src/fio-static/fio 00:01:30.042 + FIO_BIN=/usr/src/fio-static/fio 00:01:30.042 + sudo dmesg -Tw 00:01:30.042 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:30.042 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:30.042 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:30.042 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.042 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.042 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:30.042 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.042 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.042 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:30.042 Test configuration: 00:01:30.042 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.042 SPDK_TEST_NVMF=1 00:01:30.042 SPDK_TEST_NVME_CLI=1 00:01:30.042 SPDK_TEST_NVMF_NICS=mlx5 00:01:30.042 SPDK_RUN_UBSAN=1 00:01:30.042 NET_TYPE=phy 00:01:30.042 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:30.042 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:30.042 RUN_NIGHTLY=1 06:41:51 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:30.042 06:41:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:30.042 06:41:51 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:30.042 06:41:51 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:30.042 06:41:51 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:30.042 06:41:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.042 06:41:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.042 06:41:51 -- 
paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.042 06:41:51 -- paths/export.sh@5 -- $ export PATH 00:01:30.042 06:41:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.042 06:41:51 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:30.042 06:41:51 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:30.042 06:41:51 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734241311.XXXXXX 00:01:30.042 06:41:51 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734241311.2UUjwr 00:01:30.042 06:41:51 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:30.042 06:41:51 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:30.042 06:41:51 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:30.042 06:41:51 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:30.042 06:41:51 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:30.042 06:41:51 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:30.042 06:41:51 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:30.042 06:41:51 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:30.042 06:41:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.043 06:41:51 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:30.043 06:41:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.043 06:41:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.043 06:41:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:30.043 06:41:51 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.043 Sun Dec 15 05:41:51 AM UTC 2024 00:01:30.043 06:41:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.043 LTS-67-gc13c99a5e 00:01:30.043 06:41:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:30.043 06:41:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:30.043 06:41:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:30.043 06:41:51 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:30.043 06:41:51 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:30.043 06:41:51 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:30.043 ************************************ 00:01:30.043 START TEST ubsan 00:01:30.043 ************************************ 00:01:30.043 06:41:51 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:30.043 using ubsan 00:01:30.043 00:01:30.043 real 0m0.000s 00:01:30.043 user 0m0.000s 00:01:30.043 sys 0m0.000s 00:01:30.043 06:41:51 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:30.043 06:41:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.043 ************************************ 00:01:30.043 END TEST ubsan 00:01:30.043 ************************************ 00:01:30.303 06:41:51 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:30.303 06:41:51 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:30.303 06:41:51 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:30.303 06:41:51 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:30.303 06:41:51 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:30.303 06:41:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.303 ************************************ 00:01:30.303 START TEST build_native_dpdk 00:01:30.303 ************************************ 00:01:30.303 06:41:51 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:01:30.303 06:41:51 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:30.303 06:41:51 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:30.303 06:41:51 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:30.303 06:41:51 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:30.303 06:41:51 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:30.303 06:41:51 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:30.303 06:41:51 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:30.303 06:41:51 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:30.303 06:41:51 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:30.303 06:41:51 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:30.303 06:41:51 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:30.303 06:41:51 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:30.303 06:41:51 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:30.303 06:41:51 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:30.303 06:41:51 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:30.303 06:41:51 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:30.303 06:41:51 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:30.303 caf0f5d395 version: 22.11.4 00:01:30.303 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:30.303 dc9c799c7d vhost: fix missing spinlock unlock 00:01:30.303 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:30.303 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:30.303 06:41:51 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:30.303 06:41:51 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:30.303 06:41:51 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:30.303 06:41:51 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:30.303 06:41:51 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:30.303 06:41:51 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:30.303 06:41:51 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:30.303 06:41:51 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:30.303 06:41:51 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:30.303 06:41:51 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:30.303 06:41:51 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:30.303 06:41:51 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:30.303 06:41:51 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:30.303 06:41:51 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:30.303 06:41:51 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:30.303 06:41:51 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:30.303 06:41:51 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:30.303 06:41:51 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:30.303 06:41:51 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:30.303 06:41:51 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:30.303 06:41:51 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:30.303 06:41:51 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:30.303 06:41:51 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:30.303 06:41:51 -- scripts/common.sh@343 -- $ case "$op" in 00:01:30.303 06:41:51 -- scripts/common.sh@344 -- $ : 1 00:01:30.303 06:41:51 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:30.303 06:41:51 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:30.303 06:41:51 -- scripts/common.sh@364 -- $ decimal 22 00:01:30.303 06:41:51 -- scripts/common.sh@352 -- $ local d=22 00:01:30.303 06:41:51 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:30.303 06:41:51 -- scripts/common.sh@354 -- $ echo 22 00:01:30.303 06:41:51 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:30.303 06:41:51 -- scripts/common.sh@365 -- $ decimal 21 00:01:30.303 06:41:51 -- scripts/common.sh@352 -- $ local d=21 00:01:30.303 06:41:51 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:30.303 06:41:51 -- scripts/common.sh@354 -- $ echo 21 00:01:30.303 06:41:51 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:30.303 06:41:51 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:30.303 06:41:51 -- scripts/common.sh@366 -- $ return 1 00:01:30.303 06:41:51 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:30.303 patching file config/rte_config.h 00:01:30.303 Hunk #1 succeeded at 60 (offset 1 line). 00:01:30.303 06:41:51 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:30.303 06:41:51 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:30.303 06:41:51 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:30.303 06:41:51 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:30.303 06:41:51 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:30.303 06:41:51 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:30.303 06:41:51 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:30.303 06:41:51 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:30.303 06:41:51 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:30.303 06:41:51 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:30.303 06:41:51 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:30.303 06:41:51 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:30.303 06:41:51 -- scripts/common.sh@343 -- $ case "$op" in 00:01:30.303 06:41:51 -- scripts/common.sh@344 -- $ : 1 00:01:30.303 06:41:51 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:30.303 06:41:51 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:30.303 06:41:51 -- scripts/common.sh@364 -- $ decimal 22 00:01:30.303 06:41:51 -- scripts/common.sh@352 -- $ local d=22 00:01:30.303 06:41:51 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:30.303 06:41:51 -- scripts/common.sh@354 -- $ echo 22 00:01:30.303 06:41:51 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:30.303 06:41:51 -- scripts/common.sh@365 -- $ decimal 24 00:01:30.303 06:41:51 -- scripts/common.sh@352 -- $ local d=24 00:01:30.303 06:41:51 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:30.303 06:41:51 -- scripts/common.sh@354 -- $ echo 24 00:01:30.303 06:41:51 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:30.303 06:41:51 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:30.303 06:41:51 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:30.303 06:41:51 -- scripts/common.sh@367 -- $ return 0 00:01:30.303 06:41:51 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:30.303 patching file lib/pcapng/rte_pcapng.c 00:01:30.303 Hunk #1 succeeded at 110 (offset -18 lines). 
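The xtrace above comes from SPDK's scripts/common.sh, which splits version strings on `.`, `-`, and `:` and compares them field by field to decide which DPDK compatibility patches apply: 22.11.4 is not older than 21.11.0, so the rte_config.h patch for this branch is applied, and 22.11.4 is older than 24.07.0, so the rte_pcapng.c patch is applied as well. A simplified sketch of that comparison logic, not the exact library code:

    # Sketch: succeed (return 0) when version $1 sorts before version $2.
    version_lt() {
        local IFS=.-:          # split fields the same way the log does
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1               # equal versions are not "less than"
    }
    # Usage mirroring the gate above (patch filename hypothetical):
    version_lt 22.11.4 24.07.0 && patch -p1 < pcapng-fix.patch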
00:01:30.303 06:41:51 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:01:30.303 06:41:51 -- common/autobuild_common.sh@181 -- $ uname -s
00:01:30.303 06:41:51 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:01:30.303 06:41:51 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:30.303 06:41:51 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:35.587 The Meson build system
00:01:35.587 Version: 1.5.0
00:01:35.587 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:01:35.587 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
00:01:35.587 Build type: native build
00:01:35.587 Program cat found: YES (/usr/bin/cat)
00:01:35.587 Project name: DPDK
00:01:35.587 Project version: 22.11.4
00:01:35.587 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:35.587 C linker for the host machine: gcc ld.bfd 2.40-14
00:01:35.587 Host machine cpu family: x86_64
00:01:35.587 Host machine cpu: x86_64
00:01:35.587 Message: ## Building in Developer Mode ##
00:01:35.587 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:35.587 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:35.587 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:35.587 Program objdump found: YES (/usr/bin/objdump)
00:01:35.587 Program python3 found: YES (/usr/bin/python3)
00:01:35.587 Program cat found: YES (/usr/bin/cat)
00:01:35.587 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
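The configure step above first joins the wanted driver list into a single comma-separated value (`printf %s,` repeats the format once per argument, producing the trailing comma visible in -Denable_drivers, which Meson accepts) and then configures the out-of-tree build directory. A reduced sketch of that pattern with most flags trimmed; the full invocation in the log also carries the install prefix, c_args, and -Dmachine=native:

    # Sketch: join the driver list and configure a minimal DPDK build tree.
    DPDK_DRIVERS=(bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base)
    drivers=$(printf %s, "${DPDK_DRIVERS[@]}")   # "bus,bus/pci,...,net/i40e/base,"
    meson setup build-tmp --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Denable_drivers="$drivers"

`meson setup` is the non-deprecated spelling; the positional `meson build-tmp` form used here is what triggers the "ambiguous and deprecated" warning at the end of the configure output below.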
00:01:35.587 Checking for size of "void *" : 8 00:01:35.587 Checking for size of "void *" : 8 (cached) 00:01:35.587 Library m found: YES 00:01:35.587 Library numa found: YES 00:01:35.587 Has header "numaif.h" : YES 00:01:35.587 Library fdt found: NO 00:01:35.587 Library execinfo found: NO 00:01:35.587 Has header "execinfo.h" : YES 00:01:35.587 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:35.587 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.587 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.587 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.587 Run-time dependency openssl found: YES 3.1.1 00:01:35.587 Run-time dependency libpcap found: YES 1.10.4 00:01:35.587 Has header "pcap.h" with dependency libpcap: YES 00:01:35.587 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.587 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.587 Compiler for C supports arguments -Wformat: YES 00:01:35.587 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:35.587 Compiler for C supports arguments -Wformat-security: NO 00:01:35.587 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.587 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.587 Compiler for C supports arguments -Wnested-externs: YES 00:01:35.587 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.587 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.587 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.587 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.587 Compiler for C supports arguments -Wundef: YES 00:01:35.587 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.587 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.587 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:35.587 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.587 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:35.587 Compiler for C supports arguments -mavx512f: YES 00:01:35.587 Checking if "AVX512 checking" compiles: YES 00:01:35.587 Fetching value of define "__SSE4_2__" : 1 00:01:35.587 Fetching value of define "__AES__" : 1 00:01:35.587 Fetching value of define "__AVX__" : 1 00:01:35.587 Fetching value of define "__AVX2__" : 1 00:01:35.587 Fetching value of define "__AVX512BW__" : 1 00:01:35.587 Fetching value of define "__AVX512CD__" : 1 00:01:35.587 Fetching value of define "__AVX512DQ__" : 1 00:01:35.587 Fetching value of define "__AVX512F__" : 1 00:01:35.587 Fetching value of define "__AVX512VL__" : 1 00:01:35.587 Fetching value of define "__PCLMUL__" : 1 00:01:35.587 Fetching value of define "__RDRND__" : 1 00:01:35.587 Fetching value of define "__RDSEED__" : 1 00:01:35.587 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:35.587 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:35.587 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.587 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.587 Checking for function "getentropy" : YES 00:01:35.587 Message: lib/eal: Defining dependency "eal" 00:01:35.587 Message: lib/ring: Defining dependency "ring" 00:01:35.587 Message: lib/rcu: Defining dependency "rcu" 00:01:35.587 Message: lib/mempool: Defining dependency "mempool" 00:01:35.587 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.587 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.587 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:35.587 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:35.587 Compiler for C supports arguments -mpclmul: YES 00:01:35.587 Compiler for C supports arguments -maes: YES 00:01:35.587 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.587 Compiler for C supports arguments -mavx512bw: YES 00:01:35.587 Compiler for C supports arguments -mavx512dq: YES 00:01:35.587 Compiler for C supports arguments -mavx512vl: YES 00:01:35.587 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.587 Compiler for C supports arguments -mavx2: YES 00:01:35.587 Compiler for C supports arguments -mavx: YES 00:01:35.587 Message: lib/net: Defining dependency "net" 00:01:35.587 Message: lib/meter: Defining dependency "meter" 00:01:35.587 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.587 Message: lib/pci: Defining dependency "pci" 00:01:35.587 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.587 Message: lib/metrics: Defining dependency "metrics" 00:01:35.587 Message: lib/hash: Defining dependency "hash" 00:01:35.587 Message: lib/timer: Defining dependency "timer" 00:01:35.587 Fetching value of define "__AVX2__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.587 Message: lib/acl: Defining dependency "acl" 00:01:35.587 Message: lib/bbdev: Defining dependency "bbdev" 00:01:35.587 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:35.587 Run-time dependency libelf found: YES 0.191 00:01:35.587 Message: lib/bpf: Defining dependency "bpf" 00:01:35.587 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:35.587 Message: lib/compressdev: Defining dependency "compressdev" 00:01:35.587 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.587 Message: lib/distributor: Defining dependency "distributor" 00:01:35.587 Message: lib/efd: Defining dependency "efd" 00:01:35.587 Message: lib/eventdev: Defining dependency "eventdev" 00:01:35.587 Message: lib/gpudev: Defining dependency "gpudev" 00:01:35.587 Message: lib/gro: Defining dependency "gro" 00:01:35.587 Message: lib/gso: Defining dependency "gso" 00:01:35.587 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:35.587 Message: lib/jobstats: Defining dependency "jobstats" 00:01:35.587 Message: lib/latencystats: Defining dependency "latencystats" 00:01:35.587 Message: lib/lpm: Defining dependency "lpm" 00:01:35.587 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:35.587 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:35.587 Message: lib/member: Defining dependency "member" 00:01:35.587 Message: lib/pcapng: Defining dependency "pcapng" 00:01:35.587 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:35.587 Message: lib/power: Defining dependency "power" 00:01:35.587 Message: lib/rawdev: Defining dependency "rawdev" 00:01:35.587 Message: lib/regexdev: Defining dependency "regexdev" 00:01:35.587 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:35.587 Message: lib/rib: Defining dependency "rib" 00:01:35.587 Message: lib/reorder: Defining dependency "reorder" 00:01:35.587 Message: lib/sched: Defining dependency "sched" 00:01:35.587 Message: lib/security: Defining dependency "security" 00:01:35.587 Message: lib/stack: Defining dependency "stack" 00:01:35.587 Has header "linux/userfaultfd.h" : YES 00:01:35.587 Message: lib/vhost: Defining dependency "vhost" 00:01:35.587 Message: lib/ipsec: Defining dependency "ipsec" 00:01:35.587 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:35.587 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:35.587 Message: lib/fib: Defining dependency "fib" 00:01:35.587 Message: lib/port: Defining dependency "port" 00:01:35.587 Message: lib/pdump: Defining dependency "pdump" 00:01:35.587 Message: lib/table: Defining dependency "table" 00:01:35.587 Message: lib/pipeline: Defining dependency "pipeline" 00:01:35.587 Message: lib/graph: Defining dependency "graph" 00:01:35.587 Message: lib/node: Defining dependency "node" 00:01:35.587 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:35.587 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.587 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:35.587 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:35.587 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:35.587 Compiler for C supports arguments -Wno-unused-value: YES 00:01:35.587 Compiler for C supports arguments -Wno-format: YES 00:01:35.587 Compiler for C supports arguments -Wno-format-security: YES 00:01:35.587 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:36.160 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:36.160 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:36.160 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:36.160 Fetching value of define "__AVX2__" : 1 (cached) 00:01:36.160 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.160 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:36.160 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.160 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:36.160 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:36.160 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:36.160 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:36.160 Configuring doxy-api.conf using configuration 00:01:36.160 Program sphinx-build found: NO 00:01:36.160 Configuring rte_build_config.h using configuration 00:01:36.160 Message: 00:01:36.160 ================= 00:01:36.160 Applications Enabled 00:01:36.160 ================= 00:01:36.160 00:01:36.160 apps: 00:01:36.160 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:36.160 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:36.160 test-security-perf, 00:01:36.160 00:01:36.160 Message: 00:01:36.160 ================= 00:01:36.160 Libraries Enabled 00:01:36.160 ================= 00:01:36.160 00:01:36.160 libs: 00:01:36.160 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:36.160 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:36.160 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:36.160 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:36.160 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:36.160 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:36.160 table, pipeline, graph, node, 00:01:36.160 00:01:36.160 Message: 00:01:36.160 =============== 00:01:36.160 Drivers Enabled 00:01:36.160 =============== 00:01:36.160 00:01:36.160 common: 00:01:36.160 00:01:36.160 bus: 00:01:36.160 pci, vdev, 00:01:36.160 mempool: 00:01:36.160 ring, 00:01:36.160 dma: 00:01:36.160 00:01:36.160 net: 00:01:36.160 i40e, 00:01:36.160 raw: 00:01:36.160 00:01:36.160 crypto: 00:01:36.160 00:01:36.160 compress: 00:01:36.160 00:01:36.160 regex: 00:01:36.160 00:01:36.160 vdpa: 00:01:36.160 00:01:36.160 event: 00:01:36.160 00:01:36.160 baseband: 00:01:36.160 00:01:36.160 gpu: 00:01:36.160 00:01:36.160 00:01:36.160 Message: 00:01:36.160 ================= 00:01:36.160 Content Skipped 00:01:36.160 ================= 00:01:36.160 00:01:36.160 apps: 00:01:36.160 00:01:36.160 libs: 00:01:36.160 kni: explicitly disabled via build config (deprecated lib) 00:01:36.160 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:36.160 00:01:36.160 drivers: 00:01:36.160 common/cpt: not in enabled drivers build config 00:01:36.160 common/dpaax: not in enabled drivers build config 00:01:36.160 common/iavf: not in enabled drivers build config 00:01:36.160 common/idpf: not in enabled drivers build config 00:01:36.160 common/mvep: not in enabled drivers build config 00:01:36.160 common/octeontx: not in enabled drivers build config 00:01:36.160 bus/auxiliary: not in enabled drivers build config 00:01:36.160 bus/dpaa: not in enabled drivers build config 00:01:36.160 bus/fslmc: not in enabled drivers build config 00:01:36.160 bus/ifpga: not in enabled drivers build config 00:01:36.160 bus/vmbus: not in enabled drivers build config 00:01:36.160 common/cnxk: not in enabled drivers build config 00:01:36.160 common/mlx5: not in enabled drivers build config 00:01:36.160 common/qat: not in enabled drivers build config 00:01:36.160 common/sfc_efx: not in enabled drivers build config 00:01:36.160 mempool/bucket: not in enabled drivers build config 00:01:36.160 mempool/cnxk: not in enabled drivers build config 00:01:36.160 mempool/dpaa: not in enabled drivers build config 00:01:36.160 mempool/dpaa2: not in enabled drivers build config 00:01:36.160 mempool/octeontx: not in enabled drivers build config 00:01:36.160 mempool/stack: not in enabled drivers build config 00:01:36.160 dma/cnxk: not in enabled drivers build config 00:01:36.160 dma/dpaa: not in enabled drivers build config 00:01:36.160 dma/dpaa2: not in enabled drivers build config 00:01:36.160 dma/hisilicon: not in enabled drivers build config 00:01:36.160 dma/idxd: not in enabled drivers build config 00:01:36.160 dma/ioat: not in enabled drivers build config 00:01:36.160 dma/skeleton: not in enabled drivers build config 00:01:36.160 net/af_packet: not in enabled drivers build config 00:01:36.160 net/af_xdp: not in enabled drivers build config 00:01:36.160 net/ark: not in enabled drivers build config 00:01:36.160 net/atlantic: not in enabled drivers build config 00:01:36.160 net/avp: not in enabled drivers build config 00:01:36.160 net/axgbe: not in enabled drivers build config 00:01:36.160 net/bnx2x: not in enabled drivers build config 00:01:36.160 net/bnxt: not in enabled drivers build config 00:01:36.160 net/bonding: not in enabled drivers build config 00:01:36.160 net/cnxk: not in enabled drivers build config 
00:01:36.161 net/cxgbe: not in enabled drivers build config 00:01:36.161 net/dpaa: not in enabled drivers build config 00:01:36.161 net/dpaa2: not in enabled drivers build config 00:01:36.161 net/e1000: not in enabled drivers build config 00:01:36.161 net/ena: not in enabled drivers build config 00:01:36.161 net/enetc: not in enabled drivers build config 00:01:36.161 net/enetfec: not in enabled drivers build config 00:01:36.161 net/enic: not in enabled drivers build config 00:01:36.161 net/failsafe: not in enabled drivers build config 00:01:36.161 net/fm10k: not in enabled drivers build config 00:01:36.161 net/gve: not in enabled drivers build config 00:01:36.161 net/hinic: not in enabled drivers build config 00:01:36.161 net/hns3: not in enabled drivers build config 00:01:36.161 net/iavf: not in enabled drivers build config 00:01:36.161 net/ice: not in enabled drivers build config 00:01:36.161 net/idpf: not in enabled drivers build config 00:01:36.161 net/igc: not in enabled drivers build config 00:01:36.161 net/ionic: not in enabled drivers build config 00:01:36.161 net/ipn3ke: not in enabled drivers build config 00:01:36.161 net/ixgbe: not in enabled drivers build config 00:01:36.161 net/kni: not in enabled drivers build config 00:01:36.161 net/liquidio: not in enabled drivers build config 00:01:36.161 net/mana: not in enabled drivers build config 00:01:36.161 net/memif: not in enabled drivers build config 00:01:36.161 net/mlx4: not in enabled drivers build config 00:01:36.161 net/mlx5: not in enabled drivers build config 00:01:36.161 net/mvneta: not in enabled drivers build config 00:01:36.161 net/mvpp2: not in enabled drivers build config 00:01:36.161 net/netvsc: not in enabled drivers build config 00:01:36.161 net/nfb: not in enabled drivers build config 00:01:36.161 net/nfp: not in enabled drivers build config 00:01:36.161 net/ngbe: not in enabled drivers build config 00:01:36.161 net/null: not in enabled drivers build config 00:01:36.161 net/octeontx: not in enabled drivers build config 00:01:36.161 net/octeon_ep: not in enabled drivers build config 00:01:36.161 net/pcap: not in enabled drivers build config 00:01:36.161 net/pfe: not in enabled drivers build config 00:01:36.161 net/qede: not in enabled drivers build config 00:01:36.161 net/ring: not in enabled drivers build config 00:01:36.161 net/sfc: not in enabled drivers build config 00:01:36.161 net/softnic: not in enabled drivers build config 00:01:36.161 net/tap: not in enabled drivers build config 00:01:36.161 net/thunderx: not in enabled drivers build config 00:01:36.161 net/txgbe: not in enabled drivers build config 00:01:36.161 net/vdev_netvsc: not in enabled drivers build config 00:01:36.161 net/vhost: not in enabled drivers build config 00:01:36.161 net/virtio: not in enabled drivers build config 00:01:36.161 net/vmxnet3: not in enabled drivers build config 00:01:36.161 raw/cnxk_bphy: not in enabled drivers build config 00:01:36.161 raw/cnxk_gpio: not in enabled drivers build config 00:01:36.161 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:36.161 raw/ifpga: not in enabled drivers build config 00:01:36.161 raw/ntb: not in enabled drivers build config 00:01:36.161 raw/skeleton: not in enabled drivers build config 00:01:36.161 crypto/armv8: not in enabled drivers build config 00:01:36.161 crypto/bcmfs: not in enabled drivers build config 00:01:36.161 crypto/caam_jr: not in enabled drivers build config 00:01:36.161 crypto/ccp: not in enabled drivers build config 00:01:36.161 crypto/cnxk: not in enabled drivers 
build config 00:01:36.161 crypto/dpaa_sec: not in enabled drivers build config 00:01:36.161 crypto/dpaa2_sec: not in enabled drivers build config 00:01:36.161 crypto/ipsec_mb: not in enabled drivers build config 00:01:36.161 crypto/mlx5: not in enabled drivers build config 00:01:36.161 crypto/mvsam: not in enabled drivers build config 00:01:36.161 crypto/nitrox: not in enabled drivers build config 00:01:36.161 crypto/null: not in enabled drivers build config 00:01:36.161 crypto/octeontx: not in enabled drivers build config 00:01:36.161 crypto/openssl: not in enabled drivers build config 00:01:36.161 crypto/scheduler: not in enabled drivers build config 00:01:36.161 crypto/uadk: not in enabled drivers build config 00:01:36.161 crypto/virtio: not in enabled drivers build config 00:01:36.161 compress/isal: not in enabled drivers build config 00:01:36.161 compress/mlx5: not in enabled drivers build config 00:01:36.161 compress/octeontx: not in enabled drivers build config 00:01:36.161 compress/zlib: not in enabled drivers build config 00:01:36.161 regex/mlx5: not in enabled drivers build config 00:01:36.161 regex/cn9k: not in enabled drivers build config 00:01:36.161 vdpa/ifc: not in enabled drivers build config 00:01:36.161 vdpa/mlx5: not in enabled drivers build config 00:01:36.161 vdpa/sfc: not in enabled drivers build config 00:01:36.161 event/cnxk: not in enabled drivers build config 00:01:36.161 event/dlb2: not in enabled drivers build config 00:01:36.161 event/dpaa: not in enabled drivers build config 00:01:36.161 event/dpaa2: not in enabled drivers build config 00:01:36.161 event/dsw: not in enabled drivers build config 00:01:36.161 event/opdl: not in enabled drivers build config 00:01:36.161 event/skeleton: not in enabled drivers build config 00:01:36.161 event/sw: not in enabled drivers build config 00:01:36.161 event/octeontx: not in enabled drivers build config 00:01:36.161 baseband/acc: not in enabled drivers build config 00:01:36.161 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:36.161 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:36.161 baseband/la12xx: not in enabled drivers build config 00:01:36.161 baseband/null: not in enabled drivers build config 00:01:36.161 baseband/turbo_sw: not in enabled drivers build config 00:01:36.161 gpu/cuda: not in enabled drivers build config 00:01:36.161 00:01:36.161 00:01:36.161 Build targets in project: 311 00:01:36.161 00:01:36.161 DPDK 22.11.4 00:01:36.161 00:01:36.161 User defined options 00:01:36.161 libdir : lib 00:01:36.161 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:36.161 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:36.161 c_link_args : 00:01:36.161 enable_docs : false 00:01:36.161 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:36.161 enable_kmods : false 00:01:36.161 machine : native 00:01:36.161 tests : false 00:01:36.161 00:01:36.161 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:36.161 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
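Configuration ends here; everything that follows is the compile phase. Meson has written a build.ninja into build-tmp, and the pipeline drives it with one job per hardware thread (-j112 on this node). A sketch of the generic two-step sequence, assuming a standard Meson-generated tree; this excerpt shows only the compile step, and the autotest flow consumes the result via SPDK_RUN_EXTERNAL_DPDK rather than a system-wide install:

    ninja -C build-tmp -j"$(nproc)"   # compile the generated targets in parallel
    ninja -C build-tmp install       # stage libraries/headers into the configured prefix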
00:01:36.161 06:41:57 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:01:36.161 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:36.161 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:36.161 [2/740] Generating lib/rte_telemetry_def with a custom command 00:01:36.161 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:36.161 [4/740] Generating lib/rte_kvargs_def with a custom command 00:01:36.428 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:36.428 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:36.428 [7/740] Generating lib/rte_ring_mingw with a custom command 00:01:36.428 [8/740] Generating lib/rte_rcu_def with a custom command 00:01:36.428 [9/740] Generating lib/rte_eal_mingw with a custom command 00:01:36.428 [10/740] Generating lib/rte_ring_def with a custom command 00:01:36.428 [11/740] Generating lib/rte_rcu_mingw with a custom command 00:01:36.428 [12/740] Generating lib/rte_mempool_def with a custom command 00:01:36.428 [13/740] Generating lib/rte_mempool_mingw with a custom command 00:01:36.428 [14/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:36.428 [15/740] Generating lib/rte_net_mingw with a custom command 00:01:36.428 [16/740] Generating lib/rte_meter_mingw with a custom command 00:01:36.428 [17/740] Generating lib/rte_eal_def with a custom command 00:01:36.428 [18/740] Generating lib/rte_mbuf_def with a custom command 00:01:36.428 [19/740] Generating lib/rte_net_def with a custom command 00:01:36.428 [20/740] Generating lib/rte_meter_def with a custom command 00:01:36.428 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:36.428 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:36.428 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:36.428 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:36.428 [25/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:36.428 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:36.428 [27/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:36.428 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:36.428 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:36.428 [30/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:36.428 [31/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:36.428 [32/740] Generating lib/rte_pci_mingw with a custom command 00:01:36.428 [33/740] Generating lib/rte_ethdev_def with a custom command 00:01:36.428 [34/740] Generating lib/rte_pci_def with a custom command 00:01:36.428 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:36.428 [36/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:36.428 [37/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:36.428 [38/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:36.428 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:36.428 [40/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:36.428 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:36.428 
[42/740] Linking static target lib/librte_kvargs.a 00:01:36.428 [43/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:36.429 [44/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:36.429 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:36.429 [46/740] Generating lib/rte_cmdline_def with a custom command 00:01:36.429 [47/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:36.429 [48/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:36.429 [49/740] Generating lib/rte_metrics_mingw with a custom command 00:01:36.429 [50/740] Generating lib/rte_metrics_def with a custom command 00:01:36.429 [51/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:36.429 [52/740] Generating lib/rte_hash_def with a custom command 00:01:36.429 [53/740] Generating lib/rte_hash_mingw with a custom command 00:01:36.429 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:36.429 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:36.429 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:36.429 [57/740] Generating lib/rte_timer_def with a custom command 00:01:36.429 [58/740] Generating lib/rte_timer_mingw with a custom command 00:01:36.429 [59/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:36.429 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:36.429 [61/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.429 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:36.429 [63/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:36.429 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:36.429 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:36.429 [66/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.429 [67/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:36.429 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:36.429 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:36.688 [70/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:36.688 [71/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:36.688 [72/740] Generating lib/rte_acl_def with a custom command 00:01:36.688 [73/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:36.688 [74/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:36.688 [75/740] Generating lib/rte_bitratestats_def with a custom command 00:01:36.688 [76/740] Generating lib/rte_acl_mingw with a custom command 00:01:36.688 [77/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.688 [78/740] Generating lib/rte_bbdev_def with a custom command 00:01:36.688 [79/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:36.688 [80/740] Linking static target lib/librte_pci.a 00:01:36.688 [81/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.688 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:36.688 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:36.688 [84/740] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.688 [85/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.688 [86/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:36.688 [87/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.688 [88/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:36.688 [89/740] Generating lib/rte_bpf_mingw with a custom command 00:01:36.688 [90/740] Generating lib/rte_cfgfile_def with a custom command 00:01:36.688 [91/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.688 [92/740] Generating lib/rte_bpf_def with a custom command 00:01:36.688 [93/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.688 [94/740] Linking static target lib/librte_meter.a 00:01:36.688 [95/740] Generating lib/rte_compressdev_def with a custom command 00:01:36.688 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:36.688 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:36.688 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:36.688 [99/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:36.688 [100/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:36.688 [101/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:36.688 [102/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:36.688 [103/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:36.688 [104/740] Linking static target lib/librte_ring.a 00:01:36.688 [105/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:36.688 [106/740] Generating lib/rte_cryptodev_def with a custom command 00:01:36.688 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:36.688 [108/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:36.688 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:36.688 [110/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.688 [111/740] Generating lib/rte_distributor_mingw with a custom command 00:01:36.688 [112/740] Generating lib/rte_efd_mingw with a custom command 00:01:36.688 [113/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:36.688 [114/740] Generating lib/rte_distributor_def with a custom command 00:01:36.688 [115/740] Generating lib/rte_efd_def with a custom command 00:01:36.688 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:36.688 [117/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.688 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:36.688 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:36.688 [120/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:36.688 [121/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:36.688 [122/740] Generating lib/rte_eventdev_def with a custom command 00:01:36.688 [123/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:36.688 [124/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:36.688 [125/740] Generating lib/rte_gpudev_def with a custom command 00:01:36.688 [126/740] Generating lib/rte_gro_def with a 
custom command 00:01:36.688 [127/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:36.688 [128/740] Generating lib/rte_gro_mingw with a custom command 00:01:36.688 [129/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:36.688 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.688 [131/740] Generating lib/rte_gso_def with a custom command 00:01:36.688 [132/740] Generating lib/rte_gso_mingw with a custom command 00:01:36.957 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.957 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:36.957 [135/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.957 [136/740] Generating lib/rte_ip_frag_def with a custom command 00:01:36.957 [137/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.957 [138/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:36.957 [139/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:36.957 [140/740] Linking target lib/librte_kvargs.so.23.0 00:01:36.957 [141/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:36.957 [142/740] Generating lib/rte_jobstats_def with a custom command 00:01:36.957 [143/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:36.957 [144/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:36.957 [145/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.957 [146/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:36.957 [147/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:36.957 [148/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:36.957 [149/740] Generating lib/rte_latencystats_def with a custom command 00:01:36.957 [150/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.957 [151/740] Linking static target lib/librte_cfgfile.a 00:01:36.957 [152/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.957 [153/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.957 [154/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.957 [155/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.957 [156/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.957 [157/740] Generating lib/rte_lpm_def with a custom command 00:01:36.957 [158/740] Generating lib/rte_lpm_mingw with a custom command 00:01:36.957 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:36.957 [160/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.957 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.957 [162/740] Generating lib/rte_member_def with a custom command 00:01:36.957 [163/740] Generating lib/rte_member_mingw with a custom command 00:01:36.957 [164/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:37.217 [165/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.217 [166/740] Generating lib/rte_pcapng_def with a custom command 00:01:37.217 [167/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
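The build entries above and below cycle through three step types that meson emits for every DPDK library: "Generating lib/rte_<name>_def / _mingw with a custom command" (what appear to be per-library Windows export lists derived from each library's version.map), "Compiling C object ..." (one object per source file), and "Linking static target ..." (the .a archive; the matching shared objects are linked near the end of the build). All 740 steps are scheduled by the single "ninja -C .../dpdk/build-tmp -j112" invocation that opens this section. A minimal shell sketch of the equivalent configure-and-build, assuming a hypothetical DPDK checkout in DPDK_DIR and one job per CPU in place of the log's fixed -j112:

  # Sketch only: configure and build DPDK as the autobuild script does.
  # DPDK_DIR is an assumed checkout path; build-tmp mirrors this log.
  DPDK_DIR=${DPDK_DIR:-$PWD/dpdk}
  meson setup "$DPDK_DIR/build-tmp" "$DPDK_DIR"   # writes build.ninja covering every [N/740] step
  ninja -C "$DPDK_DIR/build-tmp" -j"$(nproc)"     # parallel build; this CI host pins -j112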
00:01:37.217 [168/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:37.217 [169/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:37.217 [170/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:37.217 [171/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:37.217 [172/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:37.217 [173/740] Linking static target lib/librte_jobstats.a 00:01:37.217 [174/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.217 [175/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.217 [176/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.217 [177/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.217 [178/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:37.217 [179/740] Generating lib/rte_power_def with a custom command 00:01:37.217 [180/740] Generating lib/rte_power_mingw with a custom command 00:01:37.217 [181/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:37.217 [182/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.217 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:37.217 [184/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:37.217 [185/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.217 [186/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:37.217 [187/740] Linking static target lib/librte_cmdline.a 00:01:37.217 [188/740] Generating lib/rte_rawdev_def with a custom command 00:01:37.217 [189/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:37.217 [190/740] Linking static target lib/librte_timer.a 00:01:37.217 [191/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.217 [192/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:37.217 [193/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:37.217 [194/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:37.217 [195/740] Linking static target lib/librte_telemetry.a 00:01:37.217 [196/740] Linking static target lib/librte_metrics.a 00:01:37.217 [197/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.217 [198/740] Generating lib/rte_regexdev_def with a custom command 00:01:37.217 [199/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:37.217 [200/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:37.217 [201/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:37.217 [202/740] Generating lib/rte_dmadev_def with a custom command 00:01:37.217 [203/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.217 [204/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:37.217 [205/740] Generating lib/rte_rib_mingw with a custom command 00:01:37.217 [206/740] Generating lib/rte_rib_def with a custom command 00:01:37.217 [207/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:37.217 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:37.217 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:37.217 [210/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.217 [211/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.217 [212/740] Generating lib/rte_reorder_def with a custom command 00:01:37.217 [213/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:37.217 [214/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.217 [215/740] Generating lib/rte_reorder_mingw with a custom command 00:01:37.217 [216/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.217 [217/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:37.217 [218/740] Generating lib/rte_sched_def with a custom command 00:01:37.217 [219/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:37.217 [220/740] Linking static target lib/librte_net.a 00:01:37.217 [221/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:37.217 [222/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.217 [223/740] Generating lib/rte_sched_mingw with a custom command 00:01:37.217 [224/740] Generating lib/rte_security_def with a custom command 00:01:37.217 [225/740] Generating lib/rte_security_mingw with a custom command 00:01:37.217 [226/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:37.217 [227/740] Generating lib/rte_stack_mingw with a custom command 00:01:37.217 [228/740] Generating lib/rte_stack_def with a custom command 00:01:37.217 [229/740] Linking static target lib/librte_bitratestats.a 00:01:37.217 [230/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:37.217 [231/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.217 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:37.217 [233/740] Generating lib/rte_vhost_def with a custom command 00:01:37.217 [234/740] Generating lib/rte_vhost_mingw with a custom command 00:01:37.217 [235/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:37.217 [236/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:37.479 [237/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.479 [238/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:37.479 [239/740] Generating lib/rte_ipsec_def with a custom command 00:01:37.479 [240/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:37.479 [241/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.479 [242/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.479 [243/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:37.479 [244/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.479 [245/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.479 [246/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:37.479 [247/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:37.479 [248/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:37.479 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:37.479 [250/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:37.479 [251/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:37.479 [252/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:37.479 [253/740] Linking static target lib/librte_stack.a 00:01:37.479 [254/740] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:37.479 [255/740] Generating lib/rte_fib_def with a custom command 00:01:37.479 [256/740] Generating lib/rte_fib_mingw with a custom command 00:01:37.479 [257/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:37.479 [258/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:37.479 [259/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:37.479 [260/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:37.479 [261/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:37.479 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:37.479 [263/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:37.479 [264/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.479 [265/740] Generating lib/rte_port_def with a custom command 00:01:37.479 [266/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.479 [267/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:37.479 [268/740] Generating lib/rte_port_mingw with a custom command 00:01:37.479 [269/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:37.479 [270/740] Generating lib/rte_pdump_mingw with a custom command 00:01:37.479 [271/740] Linking static target lib/librte_compressdev.a 00:01:37.479 [272/740] Generating lib/rte_pdump_def with a custom command 00:01:37.479 [273/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:37.479 [274/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:37.479 [275/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.479 [276/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:37.479 [277/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:37.479 [278/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.740 [279/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.740 [280/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:37.740 [281/740] Linking static target lib/librte_rcu.a 00:01:37.740 [282/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:37.740 [283/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:37.740 [284/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:37.740 [285/740] Linking static target lib/librte_rawdev.a 00:01:37.740 [286/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.741 [287/740] Linking static target lib/librte_mempool.a 00:01:37.741 [288/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.741 [289/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.741 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:37.741 [291/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:37.741 [292/740] Generating lib/rte_table_def with a custom command 00:01:37.741 [293/740] Generating lib/rte_table_mingw with a custom command 00:01:37.741 [294/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:37.741 [295/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:37.741 
[296/740] Linking static target lib/librte_bbdev.a 00:01:37.741 [297/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:37.741 [298/740] Linking static target lib/librte_dmadev.a 00:01:37.741 [299/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:37.741 [300/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:37.741 [301/740] Linking static target lib/librte_gro.a 00:01:37.741 [302/740] Linking static target lib/librte_gpudev.a 00:01:37.741 [303/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.741 [304/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.741 [305/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:37.741 [306/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:37.741 [307/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.741 [308/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:37.741 [309/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:37.741 [310/740] Generating lib/rte_pipeline_def with a custom command 00:01:37.741 [311/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:37.741 [312/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.741 [313/740] Linking static target lib/librte_latencystats.a 00:01:37.741 [314/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:37.741 [315/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:37.741 [316/740] Linking static target lib/librte_gso.a 00:01:37.741 [317/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:37.741 [318/740] Linking target lib/librte_telemetry.so.23.0 00:01:37.741 [319/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:37.741 [320/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:37.741 [321/740] Generating lib/rte_graph_def with a custom command 00:01:37.741 [322/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:37.741 [323/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:37.741 [324/740] Generating lib/rte_graph_mingw with a custom command 00:01:38.003 [325/740] Linking static target lib/librte_distributor.a 00:01:38.003 [326/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.003 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:38.003 [328/740] Linking static target lib/librte_ip_frag.a 00:01:38.003 [329/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:38.003 [330/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:38.003 [331/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.003 [332/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:38.003 [333/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:38.003 [334/740] Linking static target lib/librte_regexdev.a 00:01:38.003 [335/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:38.003 [336/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:38.003 [337/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:38.003 [338/740] 
Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:38.003 [339/740] Generating lib/rte_node_def with a custom command 00:01:38.003 [340/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.003 [341/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:38.003 [342/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:38.003 [343/740] Generating lib/rte_node_mingw with a custom command 00:01:38.003 [344/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.003 [345/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:38.003 [346/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:38.003 [347/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:38.003 [348/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.003 [349/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:38.003 [350/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:38.003 [351/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.003 [352/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.003 [353/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:38.003 [354/740] Linking static target lib/librte_eal.a 00:01:38.003 [355/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.264 [356/740] Linking static target lib/librte_reorder.a 00:01:38.264 [357/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.264 [358/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:38.264 [359/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.264 [360/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:38.264 [361/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:38.264 [362/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:38.264 [363/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.264 [364/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.264 [365/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:38.265 [366/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.265 [367/740] Linking static target lib/librte_power.a 00:01:38.265 [368/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.265 [369/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:38.265 [370/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:38.265 [371/740] Linking static target lib/librte_security.a 00:01:38.265 [372/740] Linking static target lib/librte_pcapng.a 00:01:38.265 [373/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:38.265 [374/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:38.265 [375/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:38.265 [376/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.265 [377/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:38.265 [378/740] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:38.265 [379/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:38.265 [380/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:38.265 [381/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:38.265 [382/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:38.265 [383/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:38.265 [384/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.265 [385/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:38.265 [386/740] Linking static target lib/librte_mbuf.a 00:01:38.265 [387/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.265 [388/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:38.265 [389/740] Linking static target lib/librte_bpf.a 00:01:38.265 [390/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:38.265 [391/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:38.527 [392/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:38.527 [393/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:38.527 [394/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.527 [395/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.527 [396/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:38.527 [397/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:38.527 [398/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:38.527 [399/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:38.527 [400/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:38.527 [401/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:38.527 [402/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:38.527 [403/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:38.527 [404/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:38.527 [405/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:38.527 [406/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:38.527 [407/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:38.527 [408/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:38.527 [409/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:38.527 [410/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.527 [411/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:38.527 [412/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:38.527 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:38.527 [414/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:38.527 [415/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:38.527 [416/740] Linking static target lib/librte_lpm.a 00:01:38.527 [417/740] Linking static target lib/librte_rib.a 00:01:38.527 [418/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.527 [419/740] Compiling C object 
lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:38.527 [420/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.527 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:38.527 [422/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:38.527 [423/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.527 [424/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.527 [425/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:38.527 [426/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:38.527 [427/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:38.527 [428/740] Linking static target lib/librte_graph.a 00:01:38.527 [429/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:38.527 [430/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:38.527 [431/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:38.793 [432/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:38.793 [433/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:38.793 [434/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:38.793 [435/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:38.793 [436/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.793 [437/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:38.793 [438/740] Linking static target lib/librte_efd.a 00:01:38.793 [439/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:38.793 [440/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:38.793 [441/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.793 [442/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:38.793 [443/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:38.793 [444/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:38.793 [445/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:38.793 [446/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:38.793 [447/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.793 [448/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.793 [449/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:38.793 [450/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.793 [451/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.793 [452/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.793 [453/740] Linking static target drivers/librte_bus_vdev.a 00:01:38.793 [454/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.054 [455/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:39.054 [456/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:39.054 [457/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:39.054 [458/740] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:39.054 [459/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.054 [460/740] Linking static target lib/librte_fib.a 00:01:39.054 [461/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.054 [462/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:39.054 [463/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.054 [464/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.054 [465/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:39.054 [466/740] Linking static target lib/librte_pdump.a 00:01:39.054 [467/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:39.054 [468/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:39.315 [469/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:39.315 [470/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.315 [471/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:39.315 [472/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.315 [473/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:39.315 [474/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.315 [475/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.315 [476/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:39.315 [477/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.315 [478/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:39.315 [479/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:39.315 [480/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.315 [481/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:39.315 [482/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:39.315 [483/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.315 [484/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.315 [485/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.315 [486/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:39.315 [487/740] Linking static target drivers/librte_bus_pci.a 00:01:39.315 [488/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.315 [489/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:39.315 [490/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:39.315 [491/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:39.315 [492/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:39.578 [493/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:39.578 [494/740] Linking static target lib/librte_table.a 00:01:39.578 [495/740] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:39.578 [496/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:39.578 [497/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:39.578 [498/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:39.578 [499/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:39.578 [500/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:39.578 [501/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.578 [502/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:39.578 [503/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:39.578 [504/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:39.578 [505/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.578 [506/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:39.578 [507/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:39.578 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:39.578 [509/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:39.578 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:39.578 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:39.578 [512/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:39.578 [513/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:39.578 [514/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:39.578 [515/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:39.838 [516/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:39.838 [517/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:39.838 [518/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:39.838 [519/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.838 [520/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:39.838 [521/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.838 [522/740] Linking static target lib/librte_cryptodev.a 00:01:39.838 [523/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:39.838 [524/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:39.838 [525/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:39.838 [526/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.838 [527/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:39.838 [528/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.838 [529/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:39.838 [530/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:39.838 [531/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:39.838 [532/740] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:39.838 [533/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.838 [534/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:39.838 [535/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:39.838 [536/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:39.838 [537/740] Linking static target lib/librte_ipsec.a 00:01:39.838 [538/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:39.838 [539/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:39.839 [540/740] Linking static target lib/librte_sched.a 00:01:39.839 [541/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:39.839 [542/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:39.839 [543/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.097 [544/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:40.097 [545/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.097 [546/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.097 [547/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.097 [548/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:40.097 [549/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:40.097 [550/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:40.097 [551/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:40.097 [552/740] Linking static target drivers/librte_mempool_ring.a 00:01:40.097 [553/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:40.097 [554/740] Linking static target lib/librte_node.a 00:01:40.097 [555/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:40.097 [556/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:40.097 [557/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:40.097 [558/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:40.097 [559/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:40.097 [560/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:40.097 [561/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.097 [562/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:40.097 [563/740] Linking static target lib/librte_ethdev.a 00:01:40.097 [564/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:40.097 [565/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:40.097 [566/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:40.097 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:40.097 [568/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:40.097 [569/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:40.097 [570/740] Compiling C object 
lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:40.097 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:40.097 [572/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:40.097 [573/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:40.097 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:40.097 [575/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:40.097 [576/740] Linking static target lib/librte_member.a 00:01:40.097 [577/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:40.097 [578/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:40.356 [579/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:40.356 [580/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:40.356 [581/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:40.356 [582/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:40.356 [583/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:40.356 [584/740] Linking static target lib/librte_port.a 00:01:40.356 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:40.356 [586/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.356 [587/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.356 [588/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:40.356 [589/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:40.356 [590/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.356 [591/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:40.356 [592/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:40.356 [593/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:40.615 [594/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:40.615 [595/740] Linking static target lib/librte_eventdev.a 00:01:40.615 [596/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:40.615 [597/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.615 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:40.615 [599/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:40.615 [600/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.615 [601/740] Linking static target lib/librte_hash.a 00:01:40.615 [602/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:40.615 [603/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:40.615 [604/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:40.615 [605/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:40.615 [606/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.874 [607/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:40.874 [608/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:40.874 [609/740] 
Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:40.874 [610/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:40.874 [611/740] Linking static target lib/librte_acl.a 00:01:41.134 [612/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:41.134 [613/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.393 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:41.393 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:41.393 [616/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.393 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:41.652 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:41.652 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:41.911 [620/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.911 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:42.850 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:42.850 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:42.850 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:43.109 [625/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:43.109 [626/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:43.109 [627/740] Linking static target drivers/librte_net_i40e.a 00:01:43.369 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.369 [629/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.628 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:43.628 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:43.886 [632/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.144 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.418 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.678 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:49.678 [636/740] Linking static target lib/librte_vhost.a 00:01:50.616 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:50.616 [638/740] Linking static target lib/librte_pipeline.a 00:01:50.875 [639/740] Linking target app/dpdk-pdump 00:01:50.875 [640/740] Linking target app/dpdk-proc-info 00:01:50.875 [641/740] Linking target app/dpdk-test-cmdline 00:01:50.875 [642/740] Linking target app/dpdk-dumpcap 00:01:50.875 [643/740] Linking target app/dpdk-test-acl 00:01:50.875 [644/740] Linking target app/dpdk-test-gpudev 00:01:50.875 [645/740] Linking target app/dpdk-test-regex 00:01:50.875 [646/740] Linking target app/dpdk-test-compress-perf 00:01:50.875 [647/740] Linking target app/dpdk-test-security-perf 00:01:50.875 [648/740] Linking target app/dpdk-test-flow-perf 00:01:50.875 [649/740] Linking target app/dpdk-test-fib 00:01:50.875 [650/740] Linking target app/dpdk-test-crypto-perf 00:01:50.875 [651/740] Linking target app/dpdk-test-sad 
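By this point the log has shifted from compiling to linking: "Linking target lib/librte_<name>.so.23.0" produces the versioned shared objects, "Generating symbol file ... .symbols" is meson recording each shared library's exported symbols so that dependents are re-linked only when an interface actually changes, and "Generating lib/<name>.sym_chk with a custom command" appears to be DPDK's check that the symbols each library exports match its version map. A short sketch for spot-checking the linked artifacts; the build-tmp layout (lib/, app/) is assumed from typical DPDK meson builds rather than stated in this log:

  # Sketch only: inspect artifacts linked in the steps around here.
  BUILD=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
  ls "$BUILD"/app/dpdk-testpmd                                  # one of the apps linked just below
  nm -D --defined-only "$BUILD"/lib/librte_eal.so.23.0 | head   # the exports behind the .symbols files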
00:01:50.875 [652/740] Linking target app/dpdk-test-bbdev 00:01:50.875 [653/740] Linking target app/dpdk-test-pipeline 00:01:50.875 [654/740] Linking target app/dpdk-test-eventdev 00:01:50.875 [655/740] Linking target app/dpdk-testpmd 00:01:51.814 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.383 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.383 [658/740] Linking target lib/librte_eal.so.23.0 00:01:52.383 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:52.643 [660/740] Linking target lib/librte_ring.so.23.0 00:01:52.643 [661/740] Linking target lib/librte_pci.so.23.0 00:01:52.643 [662/740] Linking target lib/librte_cfgfile.so.23.0 00:01:52.643 [663/740] Linking target lib/librte_timer.so.23.0 00:01:52.643 [664/740] Linking target lib/librte_acl.so.23.0 00:01:52.643 [665/740] Linking target lib/librte_jobstats.so.23.0 00:01:52.643 [666/740] Linking target lib/librte_meter.so.23.0 00:01:52.643 [667/740] Linking target lib/librte_stack.so.23.0 00:01:52.643 [668/740] Linking target lib/librte_rawdev.so.23.0 00:01:52.643 [669/740] Linking target lib/librte_dmadev.so.23.0 00:01:52.643 [670/740] Linking target drivers/librte_bus_vdev.so.23.0 00:01:52.643 [671/740] Linking target lib/librte_graph.so.23.0 00:01:52.643 [672/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:52.643 [673/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:52.643 [674/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:52.643 [675/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:52.643 [676/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:52.643 [677/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:52.643 [678/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:52.643 [679/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:52.643 [680/740] Linking target drivers/librte_bus_pci.so.23.0 00:01:52.643 [681/740] Linking target lib/librte_rcu.so.23.0 00:01:52.643 [682/740] Linking target lib/librte_mempool.so.23.0 00:01:52.902 [683/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:52.902 [684/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:52.902 [685/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:52.902 [686/740] Linking target lib/librte_rib.so.23.0 00:01:52.902 [687/740] Linking target lib/librte_mbuf.so.23.0 00:01:52.902 [688/740] Linking target drivers/librte_mempool_ring.so.23.0 00:01:53.162 [689/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:53.162 [690/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:53.162 [691/740] Linking target lib/librte_fib.so.23.0 00:01:53.162 [692/740] Linking target lib/librte_bbdev.so.23.0 00:01:53.162 [693/740] Linking target lib/librte_gpudev.so.23.0 00:01:53.162 [694/740] Linking target lib/librte_regexdev.so.23.0 00:01:53.162 [695/740] Linking target lib/librte_net.so.23.0 00:01:53.162 [696/740] Linking target lib/librte_reorder.so.23.0 00:01:53.162 [697/740] Linking target lib/librte_compressdev.so.23.0 
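Once the final link step below completes ([740/740] Linking target lib/librte_pipeline.so.23.0), the autobuild script invokes ninja a second time with the install target. The long run of "Installing ... to ..." entries that follows is meson's install phase copying the examples tree (along with headers, libraries, and related files) under the configured prefix, which the destination paths show to be .../dpdk/build here. A sketch of that step, plus an illustrative DESTDIR staging variant this log does not use:

  # Sketch only: the install phase the script runs next (command as logged).
  ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install
  # Staging into a scratch root instead of the configured prefix (illustrative):
  DESTDIR=/tmp/dpdk-stage ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp install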
00:01:53.162 [698/740] Linking target lib/librte_distributor.so.23.0 00:01:53.162 [699/740] Linking target lib/librte_sched.so.23.0 00:01:53.162 [700/740] Linking target lib/librte_cryptodev.so.23.0 00:01:53.162 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:53.421 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:53.421 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:53.421 [704/740] Linking target lib/librte_cmdline.so.23.0 00:01:53.421 [705/740] Linking target lib/librte_hash.so.23.0 00:01:53.421 [706/740] Linking target lib/librte_ethdev.so.23.0 00:01:53.421 [707/740] Linking target lib/librte_security.so.23.0 00:01:53.421 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:53.421 [709/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:53.421 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:53.421 [711/740] Linking target lib/librte_member.so.23.0 00:01:53.421 [712/740] Linking target lib/librte_efd.so.23.0 00:01:53.421 [713/740] Linking target lib/librte_lpm.so.23.0 00:01:53.421 [714/740] Linking target lib/librte_ipsec.so.23.0 00:01:53.680 [715/740] Linking target lib/librte_pcapng.so.23.0 00:01:53.680 [716/740] Linking target lib/librte_metrics.so.23.0 00:01:53.680 [717/740] Linking target lib/librte_gro.so.23.0 00:01:53.680 [718/740] Linking target lib/librte_gso.so.23.0 00:01:53.680 [719/740] Linking target lib/librte_bpf.so.23.0 00:01:53.680 [720/740] Linking target lib/librte_ip_frag.so.23.0 00:01:53.680 [721/740] Linking target lib/librte_power.so.23.0 00:01:53.680 [722/740] Linking target lib/librte_eventdev.so.23.0 00:01:53.680 [723/740] Linking target lib/librte_vhost.so.23.0 00:01:53.680 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:01:53.680 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:53.680 [726/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:53.680 [727/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:53.680 [728/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:53.680 [729/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:53.680 [730/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:53.680 [731/740] Linking target lib/librte_node.so.23.0 00:01:53.680 [732/740] Linking target lib/librte_latencystats.so.23.0 00:01:53.680 [733/740] Linking target lib/librte_bitratestats.so.23.0 00:01:53.680 [734/740] Linking target lib/librte_pdump.so.23.0 00:01:53.680 [735/740] Linking target lib/librte_port.so.23.0 00:01:53.946 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:53.946 [737/740] Linking target lib/librte_table.so.23.0 00:01:54.312 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:55.693 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.952 [740/740] Linking target lib/librte_pipeline.so.23.0 00:01:55.952 06:42:17 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:01:55.952 ninja: Entering directory 
`/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:55.952 [0/1] Installing files. 00:01:56.216 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:56.216 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.216 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.217 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:56.218 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.218 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
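The "[0/1] Installing files" step shown here does not compile these examples; it copies their sources, config files, and Makefiles into build/share/dpdk/examples so they can be rebuilt later against the installed libraries. A minimal sketch of rebuilding one of the examples installed above, assuming the prefix /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build seen in these paths and assuming the install drops libdpdk.pc under lib/pkgconfig (adjust PKG_CONFIG_PATH if this install uses a different libdir, e.g. lib64):

    # Point pkg-config at the libdpdk.pc from this install (location is an assumption).
    PREFIX=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
    # The installed example Makefiles locate DPDK headers and libraries via pkg-config.
    make -C "$PREFIX/share/dpdk/examples/l2fwd"
    # The resulting binary is placed in the example's build/ subdirectory.
    ls "$PREFIX/share/dpdk/examples/l2fwd/build"

Each example directory listed in this manifest is installed together with its Makefile, so the same pattern applies to l3fwd, ipsec-secgw, and the rest.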
00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.219 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:56.220 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:56.220 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:56.220 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:56.221 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:56.221 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_telemetry.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing 
lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.221 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_rawdev.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 
Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:56.485 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:56.485 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:56.485 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:56.485 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:56.485 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 
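At this point the build has laid down the complete DPDK 22.11 install tree under the workspace: static and shared libraries in dpdk/build/lib, loadable PMDs in dpdk/build/lib/dpdk/pmds-23.0, the dpdk-* command-line tools in dpdk/build/bin, and (from here on) the public headers in dpdk/build/include. A minimal sketch of consuming that local prefix follows; the pkg-config location is an assumption (this excerpt does not show any .pc files being installed), and eal_smoke.c is a hypothetical name.

/*
 * eal_smoke.c (hypothetical): verify that an application can initialize
 * DPDK from the locally installed prefix shown in the log above.
 *
 * Build sketch, assuming pkg-config metadata exists under the prefix
 * (not shown in this excerpt):
 *   export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
 *   cc eal_smoke.c $(pkg-config --cflags --libs libdpdk) -o eal_smoke
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_version.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() parses the EAL arguments and probes the bus
     * drivers (librte_bus_pci, librte_bus_vdev) installed into
     * pmds-23.0 above. */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    printf("DPDK runtime: %s\n", rte_version());
    rte_eal_cleanup();
    return 0;
}

Every example application installed above (l2fwd-keepalive, ip_pipeline, testpmd, ...) starts with the same rte_eal_init() call, so a successful init is a reasonable smoke test that the libraries and bus drivers landed where the loader can find them.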
00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.485 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.486 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.487 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
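The rte_lpm.h/rte_lpm6.h pair installed just above is the IPv4/IPv6 longest-prefix-match API; the rte_lpm_altivec/neon/scalar/sse/sve headers that follow are per-architecture lookup variants pulled in by rte_lpm.h itself. A short sketch of the IPv4 side, under the same hedges as the earlier example (EAL must be initialized first; the table sizes and names here are made up):

/* lpm_demo.c (hypothetical): exercise the just-installed LPM headers. */
#include <stdint.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ip.h>
#include <rte_lpm.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;

    /* Table sized for a handful of routes; numbers are illustrative. */
    struct rte_lpm_config cfg = {
        .max_rules = 16,
        .number_tbl8s = 256,
        .flags = 0,
    };
    struct rte_lpm *lpm = rte_lpm_create("demo_lpm", rte_socket_id(), &cfg);
    if (lpm == NULL)
        return 1;

    /* 10.0.0.0/8 -> next hop 7 (addresses in host byte order). */
    rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 7);

    uint32_t next_hop = 0;
    if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
        printf("next hop %u\n", next_hop);

    rte_lpm_free(lpm);
    rte_eal_cleanup();
    return 0;
}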
00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.488 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:56.489 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:56.489 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:01:56.489 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:56.489 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:01:56.489 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:56.489 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:01:56.489 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:56.489 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:01:56.489 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:56.489 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:01:56.489 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:56.489 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:01:56.489 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 
00:01:56.489 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:01:56.489 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:56.489 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:01:56.489 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:01:56.489 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:01:56.489 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:56.489 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:01:56.489 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:56.489 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:01:56.489 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:56.489 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:01:56.489 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:56.489 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:01:56.489 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:56.489 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:01:56.489 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:56.489 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:01:56.489 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:56.489 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:01:56.489 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:56.489 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:01:56.489 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:56.489 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:01:56.489 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:56.489 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:01:56.489 Installing 
symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:56.489 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:01:56.489 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:56.489 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:01:56.489 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:56.489 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:01:56.489 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:56.489 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:01:56.489 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:56.489 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:01:56.489 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:56.490 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:01:56.490 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:56.490 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:01:56.490 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:56.490 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:01:56.490 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:56.490 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:01:56.490 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:56.490 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:01:56.490 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:56.490 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:01:56.490 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:56.490 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:01:56.490 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:56.490 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:01:56.490 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:56.490 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:01:56.490 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:01:56.490 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:01:56.490 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:56.490 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:01:56.490 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:01:56.490 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:01:56.490 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:56.490 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:01:56.490 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:56.490 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:01:56.490 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:56.490 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:01:56.490 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:56.490 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:01:56.490 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:56.490 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:01:56.490 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:56.490 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:01:56.490 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:01:56.490 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:01:56.490 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:56.490 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:01:56.490 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:56.490 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:01:56.490 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:56.490 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:01:56.490 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:56.490 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:01:56.490 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:01:56.490 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:01:56.490 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:56.490 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:01:56.490 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:01:56.490 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:01:56.490 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:01:56.490 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:01:56.490 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:01:56.490 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:01:56.490 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:01:56.490 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:01:56.490 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:01:56.490 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:01:56.490 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:01:56.490 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:01:56.490 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:01:56.490 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:01:56.490 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:56.490 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:01:56.490 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:56.490 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:01:56.490 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:01:56.490 Installing symlink pointing to librte_bus_pci.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:01:56.490 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:01:56.490 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:01:56.490 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:01:56.490 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:01:56.490 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:01:56.490 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:01:56.490 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:01:56.490 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:01:56.750 06:42:18 -- common/autobuild_common.sh@192 -- $ uname -s 00:01:56.750 06:42:18 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:56.750 06:42:18 -- common/autobuild_common.sh@203 -- $ cat 00:01:56.750 06:42:18 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:56.750 00:01:56.750 real 0m26.444s 00:01:56.750 user 6m38.357s 00:01:56.750 sys 2m11.871s 00:01:56.750 06:42:18 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:56.750 06:42:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.750 ************************************ 00:01:56.750 END TEST build_native_dpdk 00:01:56.750 ************************************ 00:01:56.750 06:42:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:56.750 06:42:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:56.750 06:42:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:56.750 06:42:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:56.750 06:42:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:56.750 06:42:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:56.750 06:42:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:56.750 06:42:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:01:56.750 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:57.010 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:57.010 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:57.010 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:57.269 Using 'verbs' RDMA provider 00:02:12.732 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 
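The stage above ends with DPDK installed into a local prefix and SPDK's configure consuming it via --with-dpdk: the "Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs" line is configure reading the libdpdk.pc / libdpdk-libs.pc files installed at 00:01:56.489, and the './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' lines are symlink-drivers-solibs.sh relocating the PMDs (bus_pci, bus_vdev, mempool_ring, net_i40e) under the plugin directory so they can be picked up as driver plugins at runtime. For reference, a minimal sketch of reproducing that flow by hand, assuming a meson-based DPDK checkout (the meson-private paths above imply one); DPDK_DIR, SPDK_DIR, and the job count are illustrative placeholders, not this job's workspace layout:

    # Build DPDK into a local prefix (meson/ninja, as the meson-private
    # entries above indicate). DPDK_DIR and SPDK_DIR are placeholders.
    DPDK_DIR=$PWD/dpdk
    SPDK_DIR=$PWD/spdk
    meson setup "$DPDK_DIR/build-tmp" "$DPDK_DIR" --prefix="$DPDK_DIR/build"
    ninja -C "$DPDK_DIR/build-tmp" install

    # Point SPDK at that prefix; configure locates the DPDK libraries and
    # headers through the pkg-config files installed by the step above.
    # Only flags that appear in this log are used here.
    cd "$SPDK_DIR"
    ./configure --enable-debug --enable-werror --with-shared \
                --with-dpdk="$DPDK_DIR/build"
    make -j"$(nproc)"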
00:02:24.951 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:24.951 Creating mk/config.mk...done. 00:02:24.951 Creating mk/cc.flags.mk...done. 00:02:24.951 Type 'make' to build. 00:02:24.951 06:42:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:24.951 06:42:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:24.951 06:42:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:24.951 06:42:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.951 ************************************ 00:02:24.951 START TEST make 00:02:24.951 ************************************ 00:02:24.952 06:42:46 -- common/autotest_common.sh@1114 -- $ make -j112 00:02:25.211 make[1]: Nothing to be done for 'all'. 00:02:35.199 CC lib/ut_mock/mock.o 00:02:35.199 CC lib/log/log.o 00:02:35.199 CC lib/log/log_flags.o 00:02:35.199 CC lib/log/log_deprecated.o 00:02:35.199 CC lib/ut/ut.o 00:02:35.199 LIB libspdk_ut_mock.a 00:02:35.199 LIB libspdk_log.a 00:02:35.199 LIB libspdk_ut.a 00:02:35.199 SO libspdk_ut_mock.so.5.0 00:02:35.199 SO libspdk_log.so.6.1 00:02:35.199 SO libspdk_ut.so.1.0 00:02:35.199 SYMLINK libspdk_ut_mock.so 00:02:35.199 SYMLINK libspdk_log.so 00:02:35.199 SYMLINK libspdk_ut.so 00:02:35.199 CC lib/util/base64.o 00:02:35.199 CC lib/util/cpuset.o 00:02:35.199 CC lib/util/bit_array.o 00:02:35.199 CC lib/util/crc16.o 00:02:35.199 CXX lib/trace_parser/trace.o 00:02:35.199 CC lib/util/crc32.o 00:02:35.199 CC lib/util/crc32c.o 00:02:35.199 CC lib/util/crc32_ieee.o 00:02:35.199 CC lib/dma/dma.o 00:02:35.199 CC lib/util/crc64.o 00:02:35.199 CC lib/ioat/ioat.o 00:02:35.199 CC lib/util/dif.o 00:02:35.199 CC lib/util/fd.o 00:02:35.199 CC lib/util/file.o 00:02:35.199 CC lib/util/hexlify.o 00:02:35.199 CC lib/util/iov.o 00:02:35.199 CC lib/util/math.o 00:02:35.199 CC lib/util/pipe.o 00:02:35.199 CC lib/util/strerror_tls.o 00:02:35.199 CC lib/util/string.o 00:02:35.199 CC lib/util/uuid.o 00:02:35.199 CC lib/util/fd_group.o 00:02:35.199 CC lib/util/xor.o 00:02:35.199 CC lib/util/zipf.o 00:02:35.199 CC lib/vfio_user/host/vfio_user_pci.o 00:02:35.199 CC lib/vfio_user/host/vfio_user.o 00:02:35.199 LIB libspdk_dma.a 00:02:35.458 SO libspdk_dma.so.3.0 00:02:35.458 LIB libspdk_ioat.a 00:02:35.458 SYMLINK libspdk_dma.so 00:02:35.458 SO libspdk_ioat.so.6.0 00:02:35.458 SYMLINK libspdk_ioat.so 00:02:35.458 LIB libspdk_vfio_user.a 00:02:35.458 SO libspdk_vfio_user.so.4.0 00:02:35.458 LIB libspdk_util.a 00:02:35.458 SYMLINK libspdk_vfio_user.so 00:02:35.718 SO libspdk_util.so.8.0 00:02:35.718 SYMLINK libspdk_util.so 00:02:35.718 LIB libspdk_trace_parser.a 00:02:35.718 SO libspdk_trace_parser.so.4.0 00:02:35.977 SYMLINK libspdk_trace_parser.so 00:02:35.977 CC lib/env_dpdk/env.o 00:02:35.977 CC lib/env_dpdk/memory.o 00:02:35.977 CC lib/env_dpdk/pci.o 00:02:35.977 CC lib/conf/conf.o 00:02:35.977 CC lib/rdma/common.o 00:02:35.977 CC lib/env_dpdk/init.o 00:02:35.977 CC lib/vmd/vmd.o 00:02:35.977 CC lib/env_dpdk/threads.o 00:02:35.977 CC lib/rdma/rdma_verbs.o 00:02:35.977 CC lib/env_dpdk/pci_ioat.o 00:02:35.977 CC lib/env_dpdk/pci_virtio.o 00:02:35.977 CC lib/json/json_parse.o 00:02:35.977 CC lib/vmd/led.o 00:02:35.977 CC lib/env_dpdk/pci_vmd.o 00:02:35.977 CC lib/json/json_util.o 00:02:35.977 CC lib/idxd/idxd.o 00:02:35.977 CC lib/env_dpdk/pci_idxd.o 00:02:35.977 CC lib/env_dpdk/pci_event.o 00:02:35.977 CC lib/json/json_write.o 00:02:35.977 CC lib/idxd/idxd_user.o 00:02:35.977 CC lib/idxd/idxd_kernel.o 00:02:35.977 CC 
lib/env_dpdk/sigbus_handler.o 00:02:35.977 CC lib/env_dpdk/pci_dpdk.o 00:02:35.977 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.977 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:36.235 LIB libspdk_conf.a 00:02:36.235 SO libspdk_conf.so.5.0 00:02:36.235 LIB libspdk_rdma.a 00:02:36.235 LIB libspdk_json.a 00:02:36.235 SO libspdk_rdma.so.5.0 00:02:36.235 SYMLINK libspdk_conf.so 00:02:36.235 SO libspdk_json.so.5.1 00:02:36.235 SYMLINK libspdk_rdma.so 00:02:36.235 SYMLINK libspdk_json.so 00:02:36.495 LIB libspdk_idxd.a 00:02:36.495 LIB libspdk_vmd.a 00:02:36.495 SO libspdk_idxd.so.11.0 00:02:36.495 SO libspdk_vmd.so.5.0 00:02:36.495 SYMLINK libspdk_idxd.so 00:02:36.495 SYMLINK libspdk_vmd.so 00:02:36.495 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.495 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.495 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.495 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.754 LIB libspdk_jsonrpc.a 00:02:36.754 SO libspdk_jsonrpc.so.5.1 00:02:37.014 SYMLINK libspdk_jsonrpc.so 00:02:37.014 LIB libspdk_env_dpdk.a 00:02:37.014 SO libspdk_env_dpdk.so.13.0 00:02:37.014 CC lib/rpc/rpc.o 00:02:37.014 SYMLINK libspdk_env_dpdk.so 00:02:37.274 LIB libspdk_rpc.a 00:02:37.274 SO libspdk_rpc.so.5.0 00:02:37.274 SYMLINK libspdk_rpc.so 00:02:37.534 CC lib/trace/trace.o 00:02:37.534 CC lib/trace/trace_flags.o 00:02:37.534 CC lib/trace/trace_rpc.o 00:02:37.534 CC lib/sock/sock.o 00:02:37.534 CC lib/notify/notify.o 00:02:37.534 CC lib/sock/sock_rpc.o 00:02:37.534 CC lib/notify/notify_rpc.o 00:02:37.793 LIB libspdk_notify.a 00:02:37.793 LIB libspdk_trace.a 00:02:37.793 SO libspdk_notify.so.5.0 00:02:37.793 SO libspdk_trace.so.9.0 00:02:37.793 SYMLINK libspdk_notify.so 00:02:38.053 SYMLINK libspdk_trace.so 00:02:38.053 LIB libspdk_sock.a 00:02:38.053 SO libspdk_sock.so.8.0 00:02:38.053 SYMLINK libspdk_sock.so 00:02:38.053 CC lib/thread/thread.o 00:02:38.053 CC lib/thread/iobuf.o 00:02:38.311 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.311 CC lib/nvme/nvme_ctrlr.o 00:02:38.311 CC lib/nvme/nvme_fabric.o 00:02:38.311 CC lib/nvme/nvme_ns_cmd.o 00:02:38.311 CC lib/nvme/nvme_ns.o 00:02:38.311 CC lib/nvme/nvme_pcie_common.o 00:02:38.311 CC lib/nvme/nvme_pcie.o 00:02:38.311 CC lib/nvme/nvme_qpair.o 00:02:38.311 CC lib/nvme/nvme.o 00:02:38.311 CC lib/nvme/nvme_quirks.o 00:02:38.311 CC lib/nvme/nvme_transport.o 00:02:38.311 CC lib/nvme/nvme_discovery.o 00:02:38.311 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.311 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.311 CC lib/nvme/nvme_tcp.o 00:02:38.311 CC lib/nvme/nvme_io_msg.o 00:02:38.311 CC lib/nvme/nvme_opal.o 00:02:38.311 CC lib/nvme/nvme_poll_group.o 00:02:38.311 CC lib/nvme/nvme_zns.o 00:02:38.311 CC lib/nvme/nvme_cuse.o 00:02:38.311 CC lib/nvme/nvme_vfio_user.o 00:02:38.311 CC lib/nvme/nvme_rdma.o 00:02:39.248 LIB libspdk_thread.a 00:02:39.248 SO libspdk_thread.so.9.0 00:02:39.508 SYMLINK libspdk_thread.so 00:02:39.767 CC lib/accel/accel.o 00:02:39.767 CC lib/accel/accel_rpc.o 00:02:39.767 CC lib/accel/accel_sw.o 00:02:39.767 CC lib/init/json_config.o 00:02:39.767 CC lib/blob/blobstore.o 00:02:39.767 CC lib/init/subsystem.o 00:02:39.767 CC lib/blob/request.o 00:02:39.767 CC lib/init/subsystem_rpc.o 00:02:39.767 CC lib/init/rpc.o 00:02:39.767 CC lib/blob/zeroes.o 00:02:39.767 CC lib/virtio/virtio.o 00:02:39.767 CC lib/blob/blob_bs_dev.o 00:02:39.767 CC lib/virtio/virtio_vhost_user.o 00:02:39.767 CC lib/virtio/virtio_vfio_user.o 00:02:39.767 CC lib/virtio/virtio_pci.o 00:02:39.767 LIB libspdk_nvme.a 00:02:39.767 LIB libspdk_init.a 00:02:40.026 SO libspdk_init.so.4.0 00:02:40.026 LIB 
libspdk_virtio.a 00:02:40.026 SO libspdk_nvme.so.12.0 00:02:40.026 SO libspdk_virtio.so.6.0 00:02:40.026 SYMLINK libspdk_init.so 00:02:40.026 SYMLINK libspdk_virtio.so 00:02:40.026 SYMLINK libspdk_nvme.so 00:02:40.285 CC lib/event/app.o 00:02:40.285 CC lib/event/reactor.o 00:02:40.285 CC lib/event/log_rpc.o 00:02:40.285 CC lib/event/app_rpc.o 00:02:40.285 CC lib/event/scheduler_static.o 00:02:40.285 LIB libspdk_accel.a 00:02:40.285 SO libspdk_accel.so.14.0 00:02:40.545 SYMLINK libspdk_accel.so 00:02:40.545 LIB libspdk_event.a 00:02:40.545 SO libspdk_event.so.12.0 00:02:40.545 SYMLINK libspdk_event.so 00:02:40.804 CC lib/bdev/bdev.o 00:02:40.804 CC lib/bdev/bdev_rpc.o 00:02:40.804 CC lib/bdev/bdev_zone.o 00:02:40.804 CC lib/bdev/part.o 00:02:40.804 CC lib/bdev/scsi_nvme.o 00:02:41.742 LIB libspdk_blob.a 00:02:41.742 SO libspdk_blob.so.10.1 00:02:41.742 SYMLINK libspdk_blob.so 00:02:42.001 CC lib/blobfs/blobfs.o 00:02:42.001 CC lib/blobfs/tree.o 00:02:42.001 CC lib/lvol/lvol.o 00:02:42.570 LIB libspdk_bdev.a 00:02:42.570 SO libspdk_bdev.so.14.0 00:02:42.570 LIB libspdk_blobfs.a 00:02:42.570 LIB libspdk_lvol.a 00:02:42.570 SO libspdk_blobfs.so.9.0 00:02:42.570 SO libspdk_lvol.so.9.1 00:02:42.570 SYMLINK libspdk_bdev.so 00:02:42.570 SYMLINK libspdk_blobfs.so 00:02:42.570 SYMLINK libspdk_lvol.so 00:02:42.829 CC lib/nvmf/ctrlr.o 00:02:42.829 CC lib/nvmf/ctrlr_discovery.o 00:02:42.829 CC lib/nvmf/ctrlr_bdev.o 00:02:42.829 CC lib/nvmf/subsystem.o 00:02:42.829 CC lib/nvmf/nvmf.o 00:02:42.829 CC lib/nvmf/nvmf_rpc.o 00:02:42.829 CC lib/nvmf/transport.o 00:02:42.829 CC lib/ublk/ublk.o 00:02:42.829 CC lib/nvmf/tcp.o 00:02:42.829 CC lib/ublk/ublk_rpc.o 00:02:42.829 CC lib/nvmf/rdma.o 00:02:42.829 CC lib/scsi/dev.o 00:02:42.829 CC lib/nbd/nbd.o 00:02:42.829 CC lib/scsi/lun.o 00:02:42.829 CC lib/nbd/nbd_rpc.o 00:02:42.829 CC lib/ftl/ftl_core.o 00:02:42.829 CC lib/scsi/port.o 00:02:42.829 CC lib/scsi/scsi.o 00:02:42.829 CC lib/ftl/ftl_init.o 00:02:42.829 CC lib/ftl/ftl_layout.o 00:02:42.829 CC lib/scsi/scsi_bdev.o 00:02:42.829 CC lib/scsi/scsi_pr.o 00:02:42.829 CC lib/ftl/ftl_debug.o 00:02:42.829 CC lib/ftl/ftl_io.o 00:02:42.829 CC lib/scsi/scsi_rpc.o 00:02:42.829 CC lib/ftl/ftl_sb.o 00:02:42.829 CC lib/scsi/task.o 00:02:42.829 CC lib/ftl/ftl_l2p.o 00:02:42.829 CC lib/ftl/ftl_nv_cache.o 00:02:42.829 CC lib/ftl/ftl_l2p_flat.o 00:02:42.829 CC lib/ftl/ftl_band.o 00:02:42.829 CC lib/ftl/ftl_band_ops.o 00:02:42.829 CC lib/ftl/ftl_writer.o 00:02:42.829 CC lib/ftl/ftl_rq.o 00:02:42.829 CC lib/ftl/ftl_reloc.o 00:02:42.829 CC lib/ftl/ftl_p2l.o 00:02:42.829 CC lib/ftl/ftl_l2p_cache.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:42.829 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:42.829 CC lib/ftl/utils/ftl_conf.o 00:02:42.829 CC lib/ftl/utils/ftl_md.o 00:02:42.829 CC lib/ftl/utils/ftl_mempool.o 00:02:42.829 CC lib/ftl/utils/ftl_bitmap.o 00:02:42.829 CC lib/ftl/utils/ftl_property.o 00:02:42.829 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:42.829 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:42.829 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:42.829 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:42.829 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:42.829 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:42.829 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:42.829 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:42.829 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:42.829 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:42.829 CC lib/ftl/base/ftl_base_dev.o 00:02:42.829 CC lib/ftl/ftl_trace.o 00:02:42.829 CC lib/ftl/base/ftl_base_bdev.o 00:02:43.396 LIB libspdk_nbd.a 00:02:43.396 SO libspdk_nbd.so.6.0 00:02:43.396 LIB libspdk_scsi.a 00:02:43.396 SYMLINK libspdk_nbd.so 00:02:43.396 SO libspdk_scsi.so.8.0 00:02:43.396 LIB libspdk_ublk.a 00:02:43.654 SO libspdk_ublk.so.2.0 00:02:43.654 SYMLINK libspdk_scsi.so 00:02:43.654 SYMLINK libspdk_ublk.so 00:02:43.654 LIB libspdk_ftl.a 00:02:43.913 CC lib/iscsi/conn.o 00:02:43.913 CC lib/iscsi/md5.o 00:02:43.913 CC lib/iscsi/init_grp.o 00:02:43.913 CC lib/iscsi/iscsi.o 00:02:43.913 CC lib/vhost/vhost.o 00:02:43.913 CC lib/vhost/vhost_rpc.o 00:02:43.913 CC lib/iscsi/param.o 00:02:43.913 CC lib/vhost/vhost_scsi.o 00:02:43.913 CC lib/iscsi/portal_grp.o 00:02:43.913 CC lib/vhost/vhost_blk.o 00:02:43.913 CC lib/iscsi/tgt_node.o 00:02:43.913 CC lib/vhost/rte_vhost_user.o 00:02:43.913 CC lib/iscsi/iscsi_subsystem.o 00:02:43.913 CC lib/iscsi/iscsi_rpc.o 00:02:43.913 CC lib/iscsi/task.o 00:02:43.913 SO libspdk_ftl.so.8.0 00:02:44.203 SYMLINK libspdk_ftl.so 00:02:44.462 LIB libspdk_nvmf.a 00:02:44.462 SO libspdk_nvmf.so.17.0 00:02:44.722 LIB libspdk_vhost.a 00:02:44.722 SO libspdk_vhost.so.7.1 00:02:44.722 SYMLINK libspdk_nvmf.so 00:02:44.722 SYMLINK libspdk_vhost.so 00:02:44.722 LIB libspdk_iscsi.a 00:02:44.722 SO libspdk_iscsi.so.7.0 00:02:44.982 SYMLINK libspdk_iscsi.so 00:02:45.242 CC module/env_dpdk/env_dpdk_rpc.o 00:02:45.501 CC module/sock/posix/posix.o 00:02:45.501 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:45.501 CC module/accel/iaa/accel_iaa.o 00:02:45.501 CC module/blob/bdev/blob_bdev.o 00:02:45.501 CC module/accel/error/accel_error.o 00:02:45.501 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:45.501 CC module/accel/error/accel_error_rpc.o 00:02:45.501 CC module/accel/ioat/accel_ioat.o 00:02:45.501 CC module/accel/iaa/accel_iaa_rpc.o 00:02:45.501 CC module/accel/ioat/accel_ioat_rpc.o 00:02:45.501 CC module/accel/dsa/accel_dsa.o 00:02:45.501 CC module/scheduler/gscheduler/gscheduler.o 00:02:45.501 CC module/accel/dsa/accel_dsa_rpc.o 00:02:45.501 LIB libspdk_env_dpdk_rpc.a 00:02:45.501 SO libspdk_env_dpdk_rpc.so.5.0 00:02:45.501 SYMLINK libspdk_env_dpdk_rpc.so 00:02:45.760 LIB libspdk_scheduler_dpdk_governor.a 00:02:45.760 LIB libspdk_scheduler_gscheduler.a 00:02:45.760 LIB libspdk_accel_ioat.a 00:02:45.760 LIB libspdk_accel_error.a 00:02:45.760 LIB libspdk_accel_iaa.a 00:02:45.760 LIB libspdk_scheduler_dynamic.a 00:02:45.760 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:45.760 SO libspdk_scheduler_gscheduler.so.3.0 00:02:45.760 SO libspdk_accel_iaa.so.2.0 00:02:45.760 SO libspdk_accel_ioat.so.5.0 00:02:45.760 SO libspdk_accel_error.so.1.0 00:02:45.760 SO libspdk_scheduler_dynamic.so.3.0 00:02:45.760 LIB libspdk_accel_dsa.a 00:02:45.760 LIB libspdk_blob_bdev.a 00:02:45.760 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:45.760 SYMLINK libspdk_scheduler_gscheduler.so 00:02:45.760 SO libspdk_accel_dsa.so.4.0 00:02:45.760 SYMLINK libspdk_accel_ioat.so 00:02:45.760 SYMLINK libspdk_accel_iaa.so 00:02:45.760 SO libspdk_blob_bdev.so.10.1 00:02:45.760 SYMLINK libspdk_scheduler_dynamic.so 
00:02:45.760 SYMLINK libspdk_accel_error.so 00:02:45.760 SYMLINK libspdk_accel_dsa.so 00:02:45.760 SYMLINK libspdk_blob_bdev.so 00:02:46.021 LIB libspdk_sock_posix.a 00:02:46.021 SO libspdk_sock_posix.so.5.0 00:02:46.021 SYMLINK libspdk_sock_posix.so 00:02:46.280 CC module/bdev/nvme/bdev_nvme.o 00:02:46.280 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:46.280 CC module/bdev/nvme/nvme_rpc.o 00:02:46.280 CC module/bdev/nvme/vbdev_opal.o 00:02:46.280 CC module/bdev/gpt/gpt.o 00:02:46.280 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:46.280 CC module/bdev/gpt/vbdev_gpt.o 00:02:46.280 CC module/bdev/nvme/bdev_mdns_client.o 00:02:46.280 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:46.280 CC module/bdev/delay/vbdev_delay.o 00:02:46.280 CC module/bdev/error/vbdev_error_rpc.o 00:02:46.280 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:46.280 CC module/bdev/null/bdev_null.o 00:02:46.280 CC module/bdev/error/vbdev_error.o 00:02:46.280 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:46.280 CC module/bdev/null/bdev_null_rpc.o 00:02:46.280 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:46.280 CC module/bdev/split/vbdev_split_rpc.o 00:02:46.280 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.280 CC module/bdev/split/vbdev_split.o 00:02:46.280 CC module/bdev/passthru/vbdev_passthru.o 00:02:46.280 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:46.280 CC module/bdev/iscsi/bdev_iscsi.o 00:02:46.280 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:46.280 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.280 CC module/bdev/aio/bdev_aio.o 00:02:46.280 CC module/bdev/lvol/vbdev_lvol.o 00:02:46.280 CC module/bdev/aio/bdev_aio_rpc.o 00:02:46.280 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:46.280 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.280 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:46.280 CC module/bdev/malloc/bdev_malloc.o 00:02:46.280 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:46.280 CC module/bdev/raid/bdev_raid.o 00:02:46.280 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:46.280 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.280 CC module/bdev/raid/raid0.o 00:02:46.280 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:46.280 CC module/bdev/raid/concat.o 00:02:46.281 CC module/bdev/ftl/bdev_ftl.o 00:02:46.281 CC module/bdev/raid/raid1.o 00:02:46.281 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:46.281 LIB libspdk_blobfs_bdev.a 00:02:46.540 SO libspdk_blobfs_bdev.so.5.0 00:02:46.540 LIB libspdk_bdev_split.a 00:02:46.540 LIB libspdk_bdev_null.a 00:02:46.540 LIB libspdk_bdev_gpt.a 00:02:46.540 SYMLINK libspdk_blobfs_bdev.so 00:02:46.540 LIB libspdk_bdev_error.a 00:02:46.540 SO libspdk_bdev_split.so.5.0 00:02:46.540 SO libspdk_bdev_null.so.5.0 00:02:46.540 LIB libspdk_bdev_ftl.a 00:02:46.540 LIB libspdk_bdev_passthru.a 00:02:46.540 SO libspdk_bdev_gpt.so.5.0 00:02:46.540 LIB libspdk_bdev_zone_block.a 00:02:46.540 LIB libspdk_bdev_delay.a 00:02:46.540 SO libspdk_bdev_error.so.5.0 00:02:46.540 LIB libspdk_bdev_aio.a 00:02:46.540 SYMLINK libspdk_bdev_split.so 00:02:46.540 LIB libspdk_bdev_iscsi.a 00:02:46.540 SO libspdk_bdev_passthru.so.5.0 00:02:46.540 SO libspdk_bdev_ftl.so.5.0 00:02:46.540 SO libspdk_bdev_zone_block.so.5.0 00:02:46.540 SO libspdk_bdev_delay.so.5.0 00:02:46.540 SYMLINK libspdk_bdev_gpt.so 00:02:46.540 LIB libspdk_bdev_malloc.a 00:02:46.540 SYMLINK libspdk_bdev_null.so 00:02:46.540 SO libspdk_bdev_iscsi.so.5.0 00:02:46.540 SO libspdk_bdev_aio.so.5.0 00:02:46.540 SYMLINK libspdk_bdev_error.so 00:02:46.540 SO libspdk_bdev_malloc.so.5.0 00:02:46.540 SYMLINK libspdk_bdev_passthru.so 00:02:46.540 SYMLINK 
libspdk_bdev_ftl.so 00:02:46.540 SYMLINK libspdk_bdev_iscsi.so 00:02:46.540 SYMLINK libspdk_bdev_zone_block.so 00:02:46.540 SYMLINK libspdk_bdev_delay.so 00:02:46.540 SYMLINK libspdk_bdev_aio.so 00:02:46.540 LIB libspdk_bdev_lvol.a 00:02:46.540 LIB libspdk_bdev_virtio.a 00:02:46.800 SYMLINK libspdk_bdev_malloc.so 00:02:46.800 SO libspdk_bdev_lvol.so.5.0 00:02:46.800 SO libspdk_bdev_virtio.so.5.0 00:02:46.800 SYMLINK libspdk_bdev_lvol.so 00:02:46.800 SYMLINK libspdk_bdev_virtio.so 00:02:46.800 LIB libspdk_bdev_raid.a 00:02:47.059 SO libspdk_bdev_raid.so.5.0 00:02:47.059 SYMLINK libspdk_bdev_raid.so 00:02:47.627 LIB libspdk_bdev_nvme.a 00:02:47.887 SO libspdk_bdev_nvme.so.6.0 00:02:47.887 SYMLINK libspdk_bdev_nvme.so 00:02:48.456 CC module/event/subsystems/vmd/vmd.o 00:02:48.456 CC module/event/subsystems/iobuf/iobuf.o 00:02:48.456 CC module/event/subsystems/scheduler/scheduler.o 00:02:48.456 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:48.456 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:48.456 CC module/event/subsystems/sock/sock.o 00:02:48.456 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:48.456 LIB libspdk_event_vmd.a 00:02:48.456 LIB libspdk_event_sock.a 00:02:48.456 LIB libspdk_event_scheduler.a 00:02:48.456 LIB libspdk_event_vhost_blk.a 00:02:48.456 LIB libspdk_event_iobuf.a 00:02:48.456 SO libspdk_event_sock.so.4.0 00:02:48.456 SO libspdk_event_vmd.so.5.0 00:02:48.456 SO libspdk_event_scheduler.so.3.0 00:02:48.456 SO libspdk_event_vhost_blk.so.2.0 00:02:48.456 SO libspdk_event_iobuf.so.2.0 00:02:48.716 SYMLINK libspdk_event_sock.so 00:02:48.716 SYMLINK libspdk_event_scheduler.so 00:02:48.716 SYMLINK libspdk_event_vmd.so 00:02:48.716 SYMLINK libspdk_event_vhost_blk.so 00:02:48.716 SYMLINK libspdk_event_iobuf.so 00:02:48.976 CC module/event/subsystems/accel/accel.o 00:02:48.976 LIB libspdk_event_accel.a 00:02:48.976 SO libspdk_event_accel.so.5.0 00:02:49.235 SYMLINK libspdk_event_accel.so 00:02:49.235 CC module/event/subsystems/bdev/bdev.o 00:02:49.494 LIB libspdk_event_bdev.a 00:02:49.494 SO libspdk_event_bdev.so.5.0 00:02:49.494 SYMLINK libspdk_event_bdev.so 00:02:49.753 CC module/event/subsystems/scsi/scsi.o 00:02:49.753 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:49.753 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:49.753 CC module/event/subsystems/ublk/ublk.o 00:02:49.753 CC module/event/subsystems/nbd/nbd.o 00:02:50.012 LIB libspdk_event_nbd.a 00:02:50.012 LIB libspdk_event_ublk.a 00:02:50.012 LIB libspdk_event_scsi.a 00:02:50.012 SO libspdk_event_nbd.so.5.0 00:02:50.012 SO libspdk_event_ublk.so.2.0 00:02:50.012 SO libspdk_event_scsi.so.5.0 00:02:50.012 LIB libspdk_event_nvmf.a 00:02:50.012 SYMLINK libspdk_event_nbd.so 00:02:50.012 SYMLINK libspdk_event_ublk.so 00:02:50.012 SO libspdk_event_nvmf.so.5.0 00:02:50.012 SYMLINK libspdk_event_scsi.so 00:02:50.271 SYMLINK libspdk_event_nvmf.so 00:02:50.272 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:50.272 CC module/event/subsystems/iscsi/iscsi.o 00:02:50.531 LIB libspdk_event_vhost_scsi.a 00:02:50.531 LIB libspdk_event_iscsi.a 00:02:50.531 SO libspdk_event_vhost_scsi.so.2.0 00:02:50.531 SO libspdk_event_iscsi.so.5.0 00:02:50.531 SYMLINK libspdk_event_vhost_scsi.so 00:02:50.531 SYMLINK libspdk_event_iscsi.so 00:02:50.789 SO libspdk.so.5.0 00:02:50.789 SYMLINK libspdk.so 00:02:51.051 CC app/spdk_lspci/spdk_lspci.o 00:02:51.051 CC app/trace_record/trace_record.o 00:02:51.051 CC app/spdk_nvme_identify/identify.o 00:02:51.051 CXX app/trace/trace.o 00:02:51.051 CC app/spdk_nvme_discover/discovery_aer.o 
00:02:51.051 CC test/rpc_client/rpc_client_test.o 00:02:51.051 CC app/spdk_nvme_perf/perf.o 00:02:51.051 TEST_HEADER include/spdk/accel.h 00:02:51.051 TEST_HEADER include/spdk/accel_module.h 00:02:51.051 CC app/spdk_top/spdk_top.o 00:02:51.051 TEST_HEADER include/spdk/assert.h 00:02:51.051 TEST_HEADER include/spdk/barrier.h 00:02:51.051 TEST_HEADER include/spdk/base64.h 00:02:51.051 TEST_HEADER include/spdk/bdev.h 00:02:51.051 TEST_HEADER include/spdk/bdev_module.h 00:02:51.051 TEST_HEADER include/spdk/bit_array.h 00:02:51.051 TEST_HEADER include/spdk/bdev_zone.h 00:02:51.051 TEST_HEADER include/spdk/bit_pool.h 00:02:51.051 TEST_HEADER include/spdk/blob_bdev.h 00:02:51.051 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:51.051 TEST_HEADER include/spdk/blobfs.h 00:02:51.051 TEST_HEADER include/spdk/blob.h 00:02:51.051 TEST_HEADER include/spdk/config.h 00:02:51.051 TEST_HEADER include/spdk/conf.h 00:02:51.051 TEST_HEADER include/spdk/cpuset.h 00:02:51.051 TEST_HEADER include/spdk/crc16.h 00:02:51.051 TEST_HEADER include/spdk/crc32.h 00:02:51.051 TEST_HEADER include/spdk/crc64.h 00:02:51.051 TEST_HEADER include/spdk/dif.h 00:02:51.051 TEST_HEADER include/spdk/dma.h 00:02:51.051 TEST_HEADER include/spdk/endian.h 00:02:51.051 TEST_HEADER include/spdk/env_dpdk.h 00:02:51.051 TEST_HEADER include/spdk/event.h 00:02:51.051 TEST_HEADER include/spdk/env.h 00:02:51.051 TEST_HEADER include/spdk/fd_group.h 00:02:51.051 TEST_HEADER include/spdk/fd.h 00:02:51.051 TEST_HEADER include/spdk/ftl.h 00:02:51.051 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:51.051 TEST_HEADER include/spdk/file.h 00:02:51.051 CC app/vhost/vhost.o 00:02:51.051 TEST_HEADER include/spdk/hexlify.h 00:02:51.051 TEST_HEADER include/spdk/gpt_spec.h 00:02:51.051 TEST_HEADER include/spdk/histogram_data.h 00:02:51.051 TEST_HEADER include/spdk/idxd.h 00:02:51.051 TEST_HEADER include/spdk/idxd_spec.h 00:02:51.051 TEST_HEADER include/spdk/init.h 00:02:51.051 TEST_HEADER include/spdk/ioat_spec.h 00:02:51.051 TEST_HEADER include/spdk/ioat.h 00:02:51.051 TEST_HEADER include/spdk/iscsi_spec.h 00:02:51.051 TEST_HEADER include/spdk/json.h 00:02:51.051 CC app/nvmf_tgt/nvmf_main.o 00:02:51.051 TEST_HEADER include/spdk/likely.h 00:02:51.051 TEST_HEADER include/spdk/log.h 00:02:51.051 TEST_HEADER include/spdk/jsonrpc.h 00:02:51.051 TEST_HEADER include/spdk/lvol.h 00:02:51.051 TEST_HEADER include/spdk/memory.h 00:02:51.051 TEST_HEADER include/spdk/mmio.h 00:02:51.051 TEST_HEADER include/spdk/nbd.h 00:02:51.051 TEST_HEADER include/spdk/notify.h 00:02:51.051 TEST_HEADER include/spdk/nvme.h 00:02:51.051 CC app/spdk_dd/spdk_dd.o 00:02:51.051 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:51.051 TEST_HEADER include/spdk/nvme_intel.h 00:02:51.051 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:51.051 TEST_HEADER include/spdk/nvme_spec.h 00:02:51.051 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.051 TEST_HEADER include/spdk/nvme_zns.h 00:02:51.051 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:51.051 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:51.051 TEST_HEADER include/spdk/nvmf_spec.h 00:02:51.051 TEST_HEADER include/spdk/nvmf_transport.h 00:02:51.051 TEST_HEADER include/spdk/nvmf.h 00:02:51.051 TEST_HEADER include/spdk/opal.h 00:02:51.051 TEST_HEADER include/spdk/opal_spec.h 00:02:51.051 TEST_HEADER include/spdk/pci_ids.h 00:02:51.051 CC app/spdk_tgt/spdk_tgt.o 00:02:51.051 TEST_HEADER include/spdk/pipe.h 00:02:51.051 TEST_HEADER include/spdk/queue.h 00:02:51.051 TEST_HEADER include/spdk/rpc.h 00:02:51.051 TEST_HEADER include/spdk/reduce.h 00:02:51.051 TEST_HEADER 
include/spdk/scheduler.h 00:02:51.051 TEST_HEADER include/spdk/scsi_spec.h 00:02:51.051 TEST_HEADER include/spdk/scsi.h 00:02:51.051 TEST_HEADER include/spdk/sock.h 00:02:51.051 TEST_HEADER include/spdk/stdinc.h 00:02:51.051 TEST_HEADER include/spdk/string.h 00:02:51.051 TEST_HEADER include/spdk/thread.h 00:02:51.051 TEST_HEADER include/spdk/trace.h 00:02:51.051 TEST_HEADER include/spdk/trace_parser.h 00:02:51.051 TEST_HEADER include/spdk/ublk.h 00:02:51.051 TEST_HEADER include/spdk/tree.h 00:02:51.051 TEST_HEADER include/spdk/util.h 00:02:51.051 TEST_HEADER include/spdk/uuid.h 00:02:51.051 TEST_HEADER include/spdk/version.h 00:02:51.051 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:51.051 TEST_HEADER include/spdk/vhost.h 00:02:51.051 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:51.051 TEST_HEADER include/spdk/vmd.h 00:02:51.051 TEST_HEADER include/spdk/xor.h 00:02:51.051 TEST_HEADER include/spdk/zipf.h 00:02:51.051 CC examples/accel/perf/accel_perf.o 00:02:51.051 CXX test/cpp_headers/accel.o 00:02:51.051 CXX test/cpp_headers/assert.o 00:02:51.051 CXX test/cpp_headers/accel_module.o 00:02:51.051 CXX test/cpp_headers/barrier.o 00:02:51.051 CXX test/cpp_headers/bdev.o 00:02:51.051 CXX test/cpp_headers/base64.o 00:02:51.051 CXX test/cpp_headers/bdev_module.o 00:02:51.051 CXX test/cpp_headers/bdev_zone.o 00:02:51.051 CXX test/cpp_headers/bit_array.o 00:02:51.051 CXX test/cpp_headers/bit_pool.o 00:02:51.051 CXX test/cpp_headers/blobfs_bdev.o 00:02:51.051 CXX test/cpp_headers/blob_bdev.o 00:02:51.051 CC examples/util/zipf/zipf.o 00:02:51.052 CXX test/cpp_headers/blobfs.o 00:02:51.052 CXX test/cpp_headers/blob.o 00:02:51.052 CXX test/cpp_headers/conf.o 00:02:51.052 CXX test/cpp_headers/config.o 00:02:51.052 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.052 CXX test/cpp_headers/crc16.o 00:02:51.052 CXX test/cpp_headers/cpuset.o 00:02:51.052 CXX test/cpp_headers/crc32.o 00:02:51.052 CXX test/cpp_headers/crc64.o 00:02:51.052 CXX test/cpp_headers/dif.o 00:02:51.052 CXX test/cpp_headers/endian.o 00:02:51.052 CXX test/cpp_headers/dma.o 00:02:51.052 CXX test/cpp_headers/env_dpdk.o 00:02:51.052 CXX test/cpp_headers/env.o 00:02:51.052 CXX test/cpp_headers/event.o 00:02:51.052 CXX test/cpp_headers/fd_group.o 00:02:51.052 CC test/env/memory/memory_ut.o 00:02:51.052 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.052 CXX test/cpp_headers/fd.o 00:02:51.052 CC test/app/histogram_perf/histogram_perf.o 00:02:51.052 CXX test/cpp_headers/ftl.o 00:02:51.052 CC test/env/pci/pci_ut.o 00:02:51.052 CXX test/cpp_headers/file.o 00:02:51.052 CXX test/cpp_headers/hexlify.o 00:02:51.052 CXX test/cpp_headers/gpt_spec.o 00:02:51.052 CXX test/cpp_headers/histogram_data.o 00:02:51.052 CC examples/ioat/perf/perf.o 00:02:51.052 CC test/nvme/sgl/sgl.o 00:02:51.052 CC test/nvme/e2edp/nvme_dp.o 00:02:51.052 CC test/nvme/err_injection/err_injection.o 00:02:51.052 CC app/fio/nvme/fio_plugin.o 00:02:51.052 CC test/nvme/startup/startup.o 00:02:51.052 CC examples/sock/hello_world/hello_sock.o 00:02:51.052 CXX test/cpp_headers/idxd.o 00:02:51.052 CC test/env/vtophys/vtophys.o 00:02:51.052 CXX test/cpp_headers/idxd_spec.o 00:02:51.052 CC test/nvme/reset/reset.o 00:02:51.052 CXX test/cpp_headers/init.o 00:02:51.052 CXX test/cpp_headers/ioat.o 00:02:51.052 CC examples/nvme/arbitration/arbitration.o 00:02:51.052 CC examples/nvme/hotplug/hotplug.o 00:02:51.052 CC examples/nvme/reconnect/reconnect.o 00:02:51.052 CC examples/idxd/perf/perf.o 00:02:51.052 CC test/app/jsoncat/jsoncat.o 00:02:51.052 CC 
test/thread/poller_perf/poller_perf.o 00:02:51.052 CC examples/nvme/abort/abort.o 00:02:51.052 CC test/nvme/simple_copy/simple_copy.o 00:02:51.052 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:51.052 CC test/nvme/connect_stress/connect_stress.o 00:02:51.052 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:51.052 CC test/app/stub/stub.o 00:02:51.052 CC examples/nvme/hello_world/hello_world.o 00:02:51.319 CC test/nvme/aer/aer.o 00:02:51.319 CC test/nvme/reserve/reserve.o 00:02:51.319 CC examples/vmd/led/led.o 00:02:51.319 CC test/nvme/overhead/overhead.o 00:02:51.319 CC test/nvme/cuse/cuse.o 00:02:51.319 CC test/event/reactor_perf/reactor_perf.o 00:02:51.319 CC examples/ioat/verify/verify.o 00:02:51.319 CC test/nvme/compliance/nvme_compliance.o 00:02:51.319 CC test/event/reactor/reactor.o 00:02:51.319 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:51.319 CC test/event/event_perf/event_perf.o 00:02:51.319 CC test/nvme/fdp/fdp.o 00:02:51.319 CC test/nvme/boot_partition/boot_partition.o 00:02:51.319 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:51.319 CC test/nvme/fused_ordering/fused_ordering.o 00:02:51.319 CC examples/nvmf/nvmf/nvmf.o 00:02:51.319 CC test/event/app_repeat/app_repeat.o 00:02:51.319 CC examples/blob/cli/blobcli.o 00:02:51.320 CC test/dma/test_dma/test_dma.o 00:02:51.320 CC examples/bdev/hello_world/hello_bdev.o 00:02:51.320 CC examples/blob/hello_world/hello_blob.o 00:02:51.320 CC app/fio/bdev/fio_plugin.o 00:02:51.320 CC examples/bdev/bdevperf/bdevperf.o 00:02:51.320 CC test/event/scheduler/scheduler.o 00:02:51.320 CC test/app/bdev_svc/bdev_svc.o 00:02:51.320 CC test/bdev/bdevio/bdevio.o 00:02:51.320 CC test/blobfs/mkfs/mkfs.o 00:02:51.320 CC test/accel/dif/dif.o 00:02:51.320 CC examples/thread/thread/thread_ex.o 00:02:51.320 LINK spdk_lspci 00:02:51.320 CC test/env/mem_callbacks/mem_callbacks.o 00:02:51.320 CC test/lvol/esnap/esnap.o 00:02:51.320 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:51.582 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.582 LINK rpc_client_test 00:02:51.582 LINK spdk_nvme_discover 00:02:51.582 LINK interrupt_tgt 00:02:51.582 LINK vhost 00:02:51.582 LINK nvmf_tgt 00:02:51.582 LINK lsvmd 00:02:51.582 LINK histogram_perf 00:02:51.582 LINK zipf 00:02:51.582 LINK spdk_trace_record 00:02:51.582 LINK vtophys 00:02:51.582 LINK jsoncat 00:02:51.582 LINK iscsi_tgt 00:02:51.582 LINK reactor 00:02:51.582 LINK poller_perf 00:02:51.582 LINK env_dpdk_post_init 00:02:51.582 LINK event_perf 00:02:51.582 LINK reactor_perf 00:02:51.582 LINK spdk_tgt 00:02:51.847 LINK led 00:02:51.847 LINK err_injection 00:02:51.847 LINK pmr_persistence 00:02:51.847 LINK boot_partition 00:02:51.847 LINK app_repeat 00:02:51.847 LINK startup 00:02:51.847 LINK doorbell_aers 00:02:51.847 LINK stub 00:02:51.847 LINK cmb_copy 00:02:51.847 LINK connect_stress 00:02:51.847 LINK bdev_svc 00:02:51.847 CXX test/cpp_headers/ioat_spec.o 00:02:51.847 CXX test/cpp_headers/iscsi_spec.o 00:02:51.847 LINK hello_world 00:02:51.847 CXX test/cpp_headers/json.o 00:02:51.847 CXX test/cpp_headers/jsonrpc.o 00:02:51.847 LINK mkfs 00:02:51.847 CXX test/cpp_headers/likely.o 00:02:51.847 CXX test/cpp_headers/log.o 00:02:51.847 LINK fused_ordering 00:02:51.847 CXX test/cpp_headers/lvol.o 00:02:51.847 LINK verify 00:02:51.847 LINK hello_sock 00:02:51.847 LINK ioat_perf 00:02:51.847 CXX test/cpp_headers/memory.o 00:02:51.847 CXX test/cpp_headers/mmio.o 00:02:51.847 CXX test/cpp_headers/nbd.o 00:02:51.847 CXX test/cpp_headers/notify.o 00:02:51.847 CXX test/cpp_headers/nvme.o 00:02:51.847 LINK reserve 
00:02:51.847 CXX test/cpp_headers/nvme_intel.o 00:02:51.847 CXX test/cpp_headers/nvme_ocssd.o 00:02:51.847 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:51.847 LINK hotplug 00:02:51.847 CXX test/cpp_headers/nvme_spec.o 00:02:51.847 CXX test/cpp_headers/nvme_zns.o 00:02:51.847 LINK hello_blob 00:02:51.847 CXX test/cpp_headers/nvmf_cmd.o 00:02:51.847 LINK reset 00:02:51.847 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:51.847 CXX test/cpp_headers/nvmf.o 00:02:51.847 CXX test/cpp_headers/nvmf_spec.o 00:02:51.847 CXX test/cpp_headers/nvmf_transport.o 00:02:51.847 CXX test/cpp_headers/opal.o 00:02:51.847 CXX test/cpp_headers/opal_spec.o 00:02:51.847 CXX test/cpp_headers/pci_ids.o 00:02:51.847 CXX test/cpp_headers/pipe.o 00:02:51.847 CXX test/cpp_headers/queue.o 00:02:51.847 CXX test/cpp_headers/reduce.o 00:02:51.847 LINK hello_bdev 00:02:51.847 CXX test/cpp_headers/rpc.o 00:02:51.847 LINK scheduler 00:02:51.847 CXX test/cpp_headers/scheduler.o 00:02:51.847 CXX test/cpp_headers/scsi.o 00:02:51.847 LINK simple_copy 00:02:51.847 CXX test/cpp_headers/scsi_spec.o 00:02:51.847 CXX test/cpp_headers/sock.o 00:02:51.847 CXX test/cpp_headers/stdinc.o 00:02:51.847 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.847 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.847 CXX test/cpp_headers/string.o 00:02:51.847 LINK sgl 00:02:51.847 LINK overhead 00:02:51.847 LINK thread 00:02:51.847 CXX test/cpp_headers/thread.o 00:02:51.847 LINK nvme_dp 00:02:51.847 CXX test/cpp_headers/trace.o 00:02:51.847 LINK mem_callbacks 00:02:51.847 LINK fdp 00:02:51.847 LINK aer 00:02:52.106 LINK arbitration 00:02:52.107 LINK nvmf 00:02:52.107 LINK reconnect 00:02:52.107 CXX test/cpp_headers/trace_parser.o 00:02:52.107 LINK idxd_perf 00:02:52.107 CXX test/cpp_headers/tree.o 00:02:52.107 LINK nvme_compliance 00:02:52.107 LINK spdk_dd 00:02:52.107 CXX test/cpp_headers/ublk.o 00:02:52.107 CXX test/cpp_headers/util.o 00:02:52.107 LINK pci_ut 00:02:52.107 CXX test/cpp_headers/uuid.o 00:02:52.107 CXX test/cpp_headers/version.o 00:02:52.107 CXX test/cpp_headers/vfio_user_pci.o 00:02:52.107 CXX test/cpp_headers/vfio_user_spec.o 00:02:52.107 CXX test/cpp_headers/vhost.o 00:02:52.107 CXX test/cpp_headers/vmd.o 00:02:52.107 LINK test_dma 00:02:52.107 CXX test/cpp_headers/xor.o 00:02:52.107 CXX test/cpp_headers/zipf.o 00:02:52.107 LINK spdk_trace 00:02:52.107 LINK dif 00:02:52.107 LINK abort 00:02:52.107 LINK bdevio 00:02:52.107 LINK accel_perf 00:02:52.107 LINK memory_ut 00:02:52.365 LINK nvme_manage 00:02:52.365 LINK nvme_fuzz 00:02:52.365 LINK blobcli 00:02:52.365 LINK spdk_nvme 00:02:52.365 LINK spdk_bdev 00:02:52.365 LINK spdk_nvme_identify 00:02:52.365 LINK vhost_fuzz 00:02:52.624 LINK bdevperf 00:02:52.625 LINK spdk_nvme_perf 00:02:52.625 LINK spdk_top 00:02:52.625 LINK cuse 00:02:53.194 LINK iscsi_fuzz 00:02:55.099 LINK esnap 00:02:55.359 00:02:55.359 real 0m30.529s 00:02:55.359 user 4m50.624s 00:02:55.359 sys 2m31.440s 00:02:55.359 06:43:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:55.359 06:43:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.359 ************************************ 00:02:55.359 END TEST make 00:02:55.359 ************************************ 00:02:55.359 06:43:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:55.359 06:43:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:55.359 06:43:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:55.359 06:43:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:55.359 06:43:16 -- scripts/common.sh@372 -- # 
cmp_versions 1.15 '<' 2 00:02:55.359 06:43:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:55.359 06:43:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:55.359 06:43:16 -- scripts/common.sh@335 -- # IFS=.-: 00:02:55.359 06:43:16 -- scripts/common.sh@335 -- # read -ra ver1 00:02:55.359 06:43:16 -- scripts/common.sh@336 -- # IFS=.-: 00:02:55.359 06:43:16 -- scripts/common.sh@336 -- # read -ra ver2 00:02:55.359 06:43:16 -- scripts/common.sh@337 -- # local 'op=<' 00:02:55.359 06:43:16 -- scripts/common.sh@339 -- # ver1_l=2 00:02:55.359 06:43:16 -- scripts/common.sh@340 -- # ver2_l=1 00:02:55.359 06:43:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:55.359 06:43:16 -- scripts/common.sh@343 -- # case "$op" in 00:02:55.359 06:43:16 -- scripts/common.sh@344 -- # : 1 00:02:55.359 06:43:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:55.359 06:43:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:55.359 06:43:16 -- scripts/common.sh@364 -- # decimal 1 00:02:55.359 06:43:16 -- scripts/common.sh@352 -- # local d=1 00:02:55.359 06:43:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:55.359 06:43:16 -- scripts/common.sh@354 -- # echo 1 00:02:55.359 06:43:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:55.359 06:43:16 -- scripts/common.sh@365 -- # decimal 2 00:02:55.359 06:43:16 -- scripts/common.sh@352 -- # local d=2 00:02:55.619 06:43:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:55.619 06:43:16 -- scripts/common.sh@354 -- # echo 2 00:02:55.619 06:43:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:55.619 06:43:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:55.619 06:43:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:55.619 06:43:17 -- scripts/common.sh@367 -- # return 0 00:02:55.619 06:43:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:55.619 06:43:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.619 --rc genhtml_branch_coverage=1 00:02:55.619 --rc genhtml_function_coverage=1 00:02:55.619 --rc genhtml_legend=1 00:02:55.619 --rc geninfo_all_blocks=1 00:02:55.619 --rc geninfo_unexecuted_blocks=1 00:02:55.619 00:02:55.619 ' 00:02:55.619 06:43:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.619 --rc genhtml_branch_coverage=1 00:02:55.619 --rc genhtml_function_coverage=1 00:02:55.619 --rc genhtml_legend=1 00:02:55.619 --rc geninfo_all_blocks=1 00:02:55.619 --rc geninfo_unexecuted_blocks=1 00:02:55.619 00:02:55.619 ' 00:02:55.619 06:43:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.619 --rc genhtml_branch_coverage=1 00:02:55.619 --rc genhtml_function_coverage=1 00:02:55.619 --rc genhtml_legend=1 00:02:55.619 --rc geninfo_all_blocks=1 00:02:55.619 --rc geninfo_unexecuted_blocks=1 00:02:55.619 00:02:55.619 ' 00:02:55.619 06:43:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.619 --rc genhtml_branch_coverage=1 00:02:55.619 --rc genhtml_function_coverage=1 00:02:55.619 --rc genhtml_legend=1 00:02:55.619 --rc geninfo_all_blocks=1 00:02:55.619 --rc geninfo_unexecuted_blocks=1 00:02:55.619 00:02:55.619 ' 00:02:55.619 06:43:17 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:55.619 06:43:17 -- nvmf/common.sh@7 -- # uname -s 00:02:55.619 06:43:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.619 06:43:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.619 06:43:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.619 06:43:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.619 06:43:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:55.619 06:43:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:55.619 06:43:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.619 06:43:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:55.619 06:43:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.619 06:43:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:55.619 06:43:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:55.619 06:43:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:55.619 06:43:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.619 06:43:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:55.619 06:43:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:55.619 06:43:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:55.619 06:43:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.619 06:43:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.619 06:43:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.619 06:43:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.619 06:43:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.619 06:43:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.619 06:43:17 -- paths/export.sh@5 -- # export PATH 00:02:55.619 06:43:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.619 06:43:17 -- nvmf/common.sh@46 -- # : 0 00:02:55.619 06:43:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:55.619 06:43:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:55.619 06:43:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:55.619 06:43:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.619 06:43:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.620 06:43:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:55.620 06:43:17 -- nvmf/common.sh@34 -- # '[' 
0 -eq 1 ']' 00:02:55.620 06:43:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:55.620 06:43:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.620 06:43:17 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.620 06:43:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.620 06:43:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.620 06:43:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:55.620 06:43:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.620 06:43:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:55.620 06:43:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.620 06:43:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.620 06:43:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.620 06:43:17 -- spdk/autotest.sh@48 -- # udevadm_pid=1123475 00:02:55.620 06:43:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.620 06:43:17 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:55.620 06:43:17 -- spdk/autotest.sh@54 -- # echo 1123477 00:02:55.620 06:43:17 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:55.620 06:43:17 -- spdk/autotest.sh@56 -- # echo 1123478 00:02:55.620 06:43:17 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:55.620 06:43:17 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:55.620 06:43:17 -- spdk/autotest.sh@60 -- # echo 1123479 00:02:55.620 06:43:17 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:55.620 06:43:17 -- spdk/autotest.sh@62 -- # echo 1123480 00:02:55.620 06:43:17 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:55.620 06:43:17 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:55.620 06:43:17 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:55.620 06:43:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:55.620 06:43:17 -- common/autotest_common.sh@10 -- # set +x 00:02:55.620 06:43:17 -- spdk/autotest.sh@70 -- # create_test_list 00:02:55.620 06:43:17 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:55.620 06:43:17 -- common/autotest_common.sh@10 -- # set +x 00:02:55.620 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:55.620 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:55.620 06:43:17 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:55.620 06:43:17 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:55.620 06:43:17 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:55.620 06:43:17 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:55.620 06:43:17 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:55.620 06:43:17 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:55.620 06:43:17 -- common/autotest_common.sh@1450 -- # uname 00:02:55.620 06:43:17 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:55.620 06:43:17 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:55.620 06:43:17 -- common/autotest_common.sh@1470 -- # uname 00:02:55.620 06:43:17 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:55.620 06:43:17 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:55.620 06:43:17 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:55.620 lcov: LCOV version 1.15 00:02:55.620 06:43:17 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:58.158 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:58.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:58.158 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:58.158 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:58.158 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:58.158 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:20.103 06:43:39 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:20.103 06:43:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:20.103 06:43:39 -- common/autotest_common.sh@10 -- # set +x 00:03:20.103 06:43:39 -- spdk/autotest.sh@89 -- # rm -f 00:03:20.103 06:43:39 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.484 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:21.484 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:21.484 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:21.484 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:21.484 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:21.743 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:22.002 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:22.002 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:22.002 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:22.002 06:43:43 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:22.002 06:43:43 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:22.002 06:43:43 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:22.002 06:43:43 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:22.002 06:43:43 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:22.002 06:43:43 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:22.002 06:43:43 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:22.002 06:43:43 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:22.002 06:43:43 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:22.002 06:43:43 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:22.002 06:43:43 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:03:22.002 06:43:43 -- spdk/autotest.sh@108 -- # grep -v p 00:03:22.002 06:43:43 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:22.002 06:43:43 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:22.002 06:43:43 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:22.002 06:43:43 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:22.002 06:43:43 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:22.002 No valid GPT data, bailing 00:03:22.002 06:43:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.002 06:43:43 -- scripts/common.sh@393 -- # pt= 00:03:22.002 06:43:43 -- 
scripts/common.sh@394 -- # return 1 00:03:22.002 06:43:43 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:22.002 1+0 records in 00:03:22.002 1+0 records out 00:03:22.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489806 s, 214 MB/s 00:03:22.002 06:43:43 -- spdk/autotest.sh@116 -- # sync 00:03:22.002 06:43:43 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.002 06:43:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.002 06:43:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:30.132 06:43:50 -- spdk/autotest.sh@122 -- # uname -s 00:03:30.132 06:43:50 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:30.132 06:43:50 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:30.132 06:43:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.132 06:43:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.132 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:03:30.132 ************************************ 00:03:30.132 START TEST setup.sh 00:03:30.132 ************************************ 00:03:30.132 06:43:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:30.132 * Looking for test storage... 00:03:30.132 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:30.132 06:43:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:30.132 06:43:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:30.132 06:43:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:30.132 06:43:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:30.132 06:43:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:30.132 06:43:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:30.132 06:43:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:30.132 06:43:50 -- scripts/common.sh@335 -- # IFS=.-: 00:03:30.132 06:43:50 -- scripts/common.sh@335 -- # read -ra ver1 00:03:30.132 06:43:50 -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.132 06:43:50 -- scripts/common.sh@336 -- # read -ra ver2 00:03:30.132 06:43:50 -- scripts/common.sh@337 -- # local 'op=<' 00:03:30.132 06:43:50 -- scripts/common.sh@339 -- # ver1_l=2 00:03:30.132 06:43:50 -- scripts/common.sh@340 -- # ver2_l=1 00:03:30.132 06:43:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:30.132 06:43:50 -- scripts/common.sh@343 -- # case "$op" in 00:03:30.132 06:43:50 -- scripts/common.sh@344 -- # : 1 00:03:30.132 06:43:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:30.132 06:43:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:30.132 06:43:50 -- scripts/common.sh@364 -- # decimal 1 00:03:30.132 06:43:50 -- scripts/common.sh@352 -- # local d=1 00:03:30.132 06:43:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.132 06:43:50 -- scripts/common.sh@354 -- # echo 1 00:03:30.132 06:43:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:30.132 06:43:50 -- scripts/common.sh@365 -- # decimal 2 00:03:30.132 06:43:50 -- scripts/common.sh@352 -- # local d=2 00:03:30.132 06:43:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.132 06:43:51 -- scripts/common.sh@354 -- # echo 2 00:03:30.132 06:43:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:30.132 06:43:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:30.132 06:43:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:30.132 06:43:51 -- scripts/common.sh@367 -- # return 0 00:03:30.132 06:43:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.132 06:43:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:30.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.132 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.133 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.133 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.133 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 00:03:30.133 06:43:51 -- setup/test-setup.sh@10 -- # uname -s 00:03:30.133 06:43:51 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:30.133 06:43:51 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:30.133 06:43:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.133 06:43:51 -- common/autotest_common.sh@10 -- # set +x 00:03:30.133 ************************************ 00:03:30.133 START TEST acl 00:03:30.133 ************************************ 00:03:30.133 06:43:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:30.133 * Looking for test storage... 
00:03:30.133 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:30.133 06:43:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:30.133 06:43:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:30.133 06:43:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:30.133 06:43:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:30.133 06:43:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:30.133 06:43:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:30.133 06:43:51 -- scripts/common.sh@335 -- # IFS=.-: 00:03:30.133 06:43:51 -- scripts/common.sh@335 -- # read -ra ver1 00:03:30.133 06:43:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.133 06:43:51 -- scripts/common.sh@336 -- # read -ra ver2 00:03:30.133 06:43:51 -- scripts/common.sh@337 -- # local 'op=<' 00:03:30.133 06:43:51 -- scripts/common.sh@339 -- # ver1_l=2 00:03:30.133 06:43:51 -- scripts/common.sh@340 -- # ver2_l=1 00:03:30.133 06:43:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:30.133 06:43:51 -- scripts/common.sh@343 -- # case "$op" in 00:03:30.133 06:43:51 -- scripts/common.sh@344 -- # : 1 00:03:30.133 06:43:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:30.133 06:43:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:30.133 06:43:51 -- scripts/common.sh@364 -- # decimal 1 00:03:30.133 06:43:51 -- scripts/common.sh@352 -- # local d=1 00:03:30.133 06:43:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.133 06:43:51 -- scripts/common.sh@354 -- # echo 1 00:03:30.133 06:43:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:30.133 06:43:51 -- scripts/common.sh@365 -- # decimal 2 00:03:30.133 06:43:51 -- scripts/common.sh@352 -- # local d=2 00:03:30.133 06:43:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.133 06:43:51 -- scripts/common.sh@354 -- # echo 2 00:03:30.133 06:43:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:30.133 06:43:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:30.133 06:43:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:30.133 06:43:51 -- scripts/common.sh@367 -- # return 0 00:03:30.133 06:43:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.133 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.133 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 00:03:30.133 06:43:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.133 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 
00:03:30.133 06:43:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:30.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.133 --rc genhtml_branch_coverage=1 00:03:30.133 --rc genhtml_function_coverage=1 00:03:30.133 --rc genhtml_legend=1 00:03:30.133 --rc geninfo_all_blocks=1 00:03:30.133 --rc geninfo_unexecuted_blocks=1 00:03:30.133 00:03:30.133 ' 00:03:30.133 06:43:51 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:30.133 06:43:51 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:30.133 06:43:51 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:30.133 06:43:51 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:30.133 06:43:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:30.133 06:43:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:30.133 06:43:51 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:30.133 06:43:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.133 06:43:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:30.133 06:43:51 -- setup/acl.sh@12 -- # devs=() 00:03:30.133 06:43:51 -- setup/acl.sh@12 -- # declare -a devs 00:03:30.133 06:43:51 -- setup/acl.sh@13 -- # drivers=() 00:03:30.133 06:43:51 -- setup/acl.sh@13 -- # declare -A drivers 00:03:30.133 06:43:51 -- setup/acl.sh@51 -- # setup reset 00:03:30.133 06:43:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.133 06:43:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.329 06:43:55 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:34.329 06:43:55 -- setup/acl.sh@16 -- # local dev driver 00:03:34.329 06:43:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.329 06:43:55 -- setup/acl.sh@15 -- # setup output status 00:03:34.329 06:43:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.329 06:43:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:36.921 Hugepages 00:03:36.921 node hugesize free / total 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 00:03:36.921 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:36.921 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:36.921 06:43:58 -- setup/acl.sh@20 -- # continue 00:03:36.921 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.181 06:43:58 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:37.181 06:43:58 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:37.181 06:43:58 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:37.181 06:43:58 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:37.181 06:43:58 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:37.181 06:43:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.181 06:43:58 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:37.181 06:43:58 -- setup/acl.sh@54 -- # run_test denied denied 00:03:37.181 06:43:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.181 06:43:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.181 06:43:58 -- common/autotest_common.sh@10 -- # set +x 00:03:37.181 ************************************ 00:03:37.181 START TEST denied 00:03:37.181 ************************************ 00:03:37.181 06:43:58 -- common/autotest_common.sh@1114 -- # denied 00:03:37.181 06:43:58 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:37.181 06:43:58 -- setup/acl.sh@38 -- # setup output config 00:03:37.181 06:43:58 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:37.181 06:43:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.181 06:43:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:41.378 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:41.378 06:44:02 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:41.378 06:44:02 -- setup/acl.sh@28 -- # local dev driver 00:03:41.378 06:44:02 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:41.378 06:44:02 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:41.378 06:44:02 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:41.378 06:44:02 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:41.378 06:44:02 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:41.378 06:44:02 -- setup/acl.sh@41 -- # setup reset 00:03:41.378 06:44:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.378 06:44:02 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.657 00:03:46.657 real 0m8.672s 00:03:46.657 user 0m2.761s 00:03:46.657 sys 0m5.231s 00:03:46.657 06:44:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:46.657 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:03:46.657 ************************************ 00:03:46.657 END TEST denied 00:03:46.657 ************************************ 00:03:46.657 06:44:07 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:46.657 06:44:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.657 06:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.657 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:03:46.657 ************************************ 00:03:46.657 START TEST allowed 00:03:46.657 ************************************ 00:03:46.657 06:44:07 -- common/autotest_common.sh@1114 -- # allowed 00:03:46.657 06:44:07 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:46.657 06:44:07 -- setup/acl.sh@45 -- # setup output config 00:03:46.657 06:44:07 -- 
setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:46.657 06:44:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.657 06:44:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:51.934 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:51.935 06:44:12 -- setup/acl.sh@47 -- # verify 00:03:51.935 06:44:12 -- setup/acl.sh@28 -- # local dev driver 00:03:51.935 06:44:12 -- setup/acl.sh@48 -- # setup reset 00:03:51.935 06:44:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.935 06:44:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.132 00:03:56.132 real 0m9.685s 00:03:56.132 user 0m2.654s 00:03:56.132 sys 0m5.240s 00:03:56.132 06:44:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:56.132 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:03:56.132 ************************************ 00:03:56.132 END TEST allowed 00:03:56.132 ************************************ 00:03:56.132 00:03:56.132 real 0m26.128s 00:03:56.132 user 0m8.295s 00:03:56.132 sys 0m15.676s 00:03:56.132 06:44:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:56.132 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:03:56.132 ************************************ 00:03:56.132 END TEST acl 00:03:56.132 ************************************ 00:03:56.132 06:44:17 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:56.132 06:44:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.132 06:44:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.132 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:03:56.132 ************************************ 00:03:56.132 START TEST hugepages 00:03:56.132 ************************************ 00:03:56.132 06:44:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:56.132 * Looking for test storage... 00:03:56.132 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:56.132 06:44:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:56.132 06:44:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:56.132 06:44:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:56.132 06:44:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:56.132 06:44:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:56.132 06:44:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:56.132 06:44:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:56.132 06:44:17 -- scripts/common.sh@335 -- # IFS=.-: 00:03:56.132 06:44:17 -- scripts/common.sh@335 -- # read -ra ver1 00:03:56.132 06:44:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.132 06:44:17 -- scripts/common.sh@336 -- # read -ra ver2 00:03:56.132 06:44:17 -- scripts/common.sh@337 -- # local 'op=<' 00:03:56.132 06:44:17 -- scripts/common.sh@339 -- # ver1_l=2 00:03:56.132 06:44:17 -- scripts/common.sh@340 -- # ver2_l=1 00:03:56.132 06:44:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:56.132 06:44:17 -- scripts/common.sh@343 -- # case "$op" in 00:03:56.132 06:44:17 -- scripts/common.sh@344 -- # : 1 00:03:56.132 06:44:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:56.132 06:44:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.132 06:44:17 -- scripts/common.sh@364 -- # decimal 1 00:03:56.132 06:44:17 -- scripts/common.sh@352 -- # local d=1 00:03:56.132 06:44:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.132 06:44:17 -- scripts/common.sh@354 -- # echo 1 00:03:56.132 06:44:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:56.132 06:44:17 -- scripts/common.sh@365 -- # decimal 2 00:03:56.132 06:44:17 -- scripts/common.sh@352 -- # local d=2 00:03:56.132 06:44:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.132 06:44:17 -- scripts/common.sh@354 -- # echo 2 00:03:56.132 06:44:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:56.132 06:44:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:56.132 06:44:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:56.132 06:44:17 -- scripts/common.sh@367 -- # return 0 00:03:56.132 06:44:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.132 06:44:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.132 --rc genhtml_branch_coverage=1 00:03:56.132 --rc genhtml_function_coverage=1 00:03:56.132 --rc genhtml_legend=1 00:03:56.132 --rc geninfo_all_blocks=1 00:03:56.132 --rc geninfo_unexecuted_blocks=1 00:03:56.132 00:03:56.132 ' 00:03:56.132 06:44:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.132 --rc genhtml_branch_coverage=1 00:03:56.132 --rc genhtml_function_coverage=1 00:03:56.132 --rc genhtml_legend=1 00:03:56.132 --rc geninfo_all_blocks=1 00:03:56.132 --rc geninfo_unexecuted_blocks=1 00:03:56.132 00:03:56.132 ' 00:03:56.132 06:44:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.132 --rc genhtml_branch_coverage=1 00:03:56.132 --rc genhtml_function_coverage=1 00:03:56.132 --rc genhtml_legend=1 00:03:56.132 --rc geninfo_all_blocks=1 00:03:56.132 --rc geninfo_unexecuted_blocks=1 00:03:56.132 00:03:56.132 ' 00:03:56.132 06:44:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.132 --rc genhtml_branch_coverage=1 00:03:56.132 --rc genhtml_function_coverage=1 00:03:56.132 --rc genhtml_legend=1 00:03:56.132 --rc geninfo_all_blocks=1 00:03:56.132 --rc geninfo_unexecuted_blocks=1 00:03:56.132 00:03:56.132 ' 00:03:56.132 06:44:17 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:56.132 06:44:17 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:56.132 06:44:17 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:56.132 06:44:17 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:56.132 06:44:17 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:56.132 06:44:17 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:56.132 06:44:17 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:56.132 06:44:17 -- setup/common.sh@18 -- # local node= 00:03:56.132 06:44:17 -- setup/common.sh@19 -- # local var val 00:03:56.132 06:44:17 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.132 06:44:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.132 06:44:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.132 06:44:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.132 06:44:17 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.132 
06:44:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.132 06:44:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 41471636 kB' 'MemAvailable: 45186424 kB' 'Buffers: 4100 kB' 'Cached: 10461872 kB' 'SwapCached: 0 kB' 'Active: 7270528 kB' 'Inactive: 3683044 kB' 'Active(anon): 6881864 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490956 kB' 'Mapped: 196316 kB' 'Shmem: 6394264 kB' 'KReclaimable: 280340 kB' 'Slab: 1045416 kB' 'SReclaimable: 280340 kB' 'SUnreclaim: 765076 kB' 'KernelStack: 22016 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433340 kB' 'Committed_AS: 8058416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217724 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.132 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.132 06:44:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 
06:44:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.133 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.133 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 
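Each "[[ <Key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" entry above is one iteration of that scan; the backslashes are simply how bash -x prints the quoted right-hand side of [[ $var == "$get" ]] so it reads as a literal, non-glob pattern. A one-line reproduction, assuming plain bash:

bash -xc 'get=Hugepagesize; var=MemTotal; [[ $var == "$get" ]]'
# stderr shows: + [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
# (exit status is 1 here, since MemTotal is not the key being sought)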
00:03:56.134 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # continue 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.134 06:44:17 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.134 06:44:17 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.134 06:44:17 -- setup/common.sh@33 -- # echo 2048 00:03:56.134 06:44:17 -- setup/common.sh@33 -- # return 0 00:03:56.134 06:44:17 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:56.134 06:44:17 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:56.134 06:44:17 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:56.134 06:44:17 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:56.134 06:44:17 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:56.134 06:44:17 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:56.134 06:44:17 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:56.134 06:44:17 -- setup/hugepages.sh@207 -- # get_nodes 00:03:56.134 06:44:17 -- setup/hugepages.sh@27 -- # local node 00:03:56.134 06:44:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.134 06:44:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:56.134 06:44:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.134 06:44:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:56.134 06:44:17 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.134 06:44:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.134 06:44:17 -- setup/hugepages.sh@208 -- # clear_hp 00:03:56.134 06:44:17 -- setup/hugepages.sh@37 -- # local node hp 00:03:56.134 06:44:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.134 06:44:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.134 06:44:17 -- setup/hugepages.sh@41 -- # echo 0 
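get_nodes above walks /sys/devices/system/node/node+([0-9]) (an extglob pattern) and records each node's 2 MB hugepage count, giving nodes_sys[0]=2048 and nodes_sys[1]=0 on this two-node box; clear_hp then zeroes every per-node, per-size nr_hugepages file (the zeroing entries continue below). A sketch of the same per-node walk, assuming the sysfs layout shown in the trace:

#!/usr/bin/env bash
# Per-NUMA-node 2048 kB hugepage counts, in the style of get_nodes (sketch only).
shopt -s extglob nullglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "nodes=${!nodes_sys[*]} counts=${nodes_sys[*]}"   # e.g. nodes=0 1 counts=2048 0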
00:03:56.134 06:44:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.134 06:44:17 -- setup/hugepages.sh@41 -- # echo 0 00:03:56.134 06:44:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.134 06:44:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.134 06:44:17 -- setup/hugepages.sh@41 -- # echo 0 00:03:56.134 06:44:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.134 06:44:17 -- setup/hugepages.sh@41 -- # echo 0 00:03:56.134 06:44:17 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:56.134 06:44:17 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:56.134 06:44:17 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:56.134 06:44:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.134 06:44:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.134 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:03:56.134 ************************************ 00:03:56.134 START TEST default_setup 00:03:56.134 ************************************ 00:03:56.134 06:44:17 -- common/autotest_common.sh@1114 -- # default_setup 00:03:56.134 06:44:17 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:56.134 06:44:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.134 06:44:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:56.134 06:44:17 -- setup/hugepages.sh@51 -- # shift 00:03:56.134 06:44:17 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:56.134 06:44:17 -- setup/hugepages.sh@52 -- # local node_ids 00:03:56.134 06:44:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.134 06:44:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.134 06:44:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:56.134 06:44:17 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:56.134 06:44:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.134 06:44:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.134 06:44:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.134 06:44:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.134 06:44:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.134 06:44:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:56.134 06:44:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:56.134 06:44:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:56.134 06:44:17 -- setup/hugepages.sh@73 -- # return 0 00:03:56.134 06:44:17 -- setup/hugepages.sh@137 -- # setup output 00:03:56.134 06:44:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.134 06:44:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:59.427 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 
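The "ioatdma -> vfio-pci" lines here, continuing below, are scripts/setup.sh detaching the I/OAT DMA channels (8086:2021) from their kernel driver and handing them to vfio-pci for userspace use; the NVMe controller at 0000:d8:00.0 gets the same treatment. The usual sysfs mechanics behind such a rebind look roughly like this (a generic sketch, not setup.sh's actual code; requires root):

bdf=0000:00:04.0                                          # one I/OAT channel above
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach ioatdma
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                  # reprobe binds vfio-pci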
00:03:59.427 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:59.427 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.970 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.970 06:44:23 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:01.970 06:44:23 -- setup/hugepages.sh@89 -- # local node 00:04:01.970 06:44:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.970 06:44:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.970 06:44:23 -- setup/hugepages.sh@92 -- # local surp 00:04:01.970 06:44:23 -- setup/hugepages.sh@93 -- # local resv 00:04:01.970 06:44:23 -- setup/hugepages.sh@94 -- # local anon 00:04:01.970 06:44:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.970 06:44:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.970 06:44:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.970 06:44:23 -- setup/common.sh@18 -- # local node= 00:04:01.970 06:44:23 -- setup/common.sh@19 -- # local var val 00:04:01.970 06:44:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.970 06:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.970 06:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.970 06:44:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.970 06:44:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.970 06:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43659964 kB' 'MemAvailable: 47374492 kB' 'Buffers: 4100 kB' 'Cached: 10462008 kB' 'SwapCached: 0 kB' 'Active: 7273284 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884620 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493676 kB' 'Mapped: 196064 kB' 'Shmem: 6394400 kB' 'KReclaimable: 279824 kB' 'Slab: 1043744 kB' 'SReclaimable: 279824 kB' 'SUnreclaim: 763920 kB' 'KernelStack: 22048 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8061700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- 
setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.970 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.970 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.971 06:44:23 -- setup/common.sh@33 -- # echo 0 00:04:01.971 06:44:23 -- setup/common.sh@33 -- # return 0 00:04:01.971 06:44:23 -- setup/hugepages.sh@97 -- # anon=0 00:04:01.971 06:44:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.971 06:44:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.971 06:44:23 -- setup/common.sh@18 -- # local node= 00:04:01.971 06:44:23 -- setup/common.sh@19 -- # local var val 00:04:01.971 06:44:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.971 06:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.971 06:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.971 06:44:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.971 06:44:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.971 06:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.971 06:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43662248 kB' 'MemAvailable: 47376760 kB' 'Buffers: 4100 kB' 'Cached: 10462012 kB' 'SwapCached: 0 kB' 'Active: 7272884 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884220 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493156 kB' 'Mapped: 196088 kB' 'Shmem: 6394404 kB' 'KReclaimable: 279792 kB' 'Slab: 1043748 kB' 'SReclaimable: 279792 kB' 'SUnreclaim: 763956 kB' 'KernelStack: 22000 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8061960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.971 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.971 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # 
continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 
06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.972 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.972 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.973 06:44:23 -- setup/common.sh@33 -- # echo 0 00:04:01.973 06:44:23 -- setup/common.sh@33 -- # return 0 00:04:01.973 06:44:23 -- setup/hugepages.sh@99 -- # surp=0 00:04:01.973 06:44:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.973 06:44:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.973 06:44:23 -- setup/common.sh@18 -- # local node= 00:04:01.973 06:44:23 -- setup/common.sh@19 -- # local var val 00:04:01.973 06:44:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.973 06:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.973 06:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.973 06:44:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.973 06:44:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.973 06:44:23 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43664104 kB' 'MemAvailable: 47378616 kB' 'Buffers: 4100 kB' 'Cached: 10462024 kB' 'SwapCached: 0 kB' 'Active: 7272696 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884032 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492896 kB' 'Mapped: 196088 kB' 'Shmem: 6394416 kB' 'KReclaimable: 279792 kB' 'Slab: 1043748 kB' 'SReclaimable: 279792 kB' 'SUnreclaim: 763956 kB' 'KernelStack: 22016 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8061976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
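verify_nr_hugepages pulls one counter per pass: anon=0 from AnonHugePages, surp=0 from HugePages_Surp, and now HugePages_Rsvd from the snapshot above, rescanning the whole file each time. When only the hugepage counters matter, a single awk pass over /proc/meminfo returns them all at once (an alternative sketch, not what setup/common.sh does):

awk -F': +' '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {print $1, $2}' /proc/meminfo
# on this run: AnonHugePages 0 kB, HugePages_Total 1024, HugePages_Free 1024,
# HugePages_Rsvd 0, HugePages_Surp 0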
00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.973 06:44:23 -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.973 06:44:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.973 06:44:23 -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / compare / continue cycle repeats for every remaining meminfo field until the requested key matches]
00:04:01.974 06:44:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.974 06:44:23 -- setup/common.sh@33 -- # echo 0
00:04:01.974 06:44:23 -- setup/common.sh@33 -- # return 0
00:04:01.974 06:44:23 -- setup/hugepages.sh@100 -- # resv=0
00:04:01.974 06:44:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:01.974 nr_hugepages=1024
00:04:01.974 06:44:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.974 resv_hugepages=0
00:04:01.974 06:44:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.974 surplus_hugepages=0
00:04:01.974 06:44:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.974 anon_hugepages=0
00:04:01.974 06:44:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.974 06:44:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:01.974 06:44:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.974 06:44:23 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.974 06:44:23 -- setup/common.sh@18 -- # local node=
00:04:01.974 06:44:23 -- setup/common.sh@19 -- # local var val
00:04:01.974 06:44:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.974 06:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.974 06:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.974 06:44:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.974 06:44:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.974 06:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.974 06:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43660112 kB' 'MemAvailable: 47374624 kB' 'Buffers: 4100 kB' 'Cached: 10462036 kB' 'SwapCached: 0 kB' 'Active: 7272488 kB' 'Inactive: 3683044 kB' 'Active(anon): 6883824 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492736 kB' 'Mapped: 196088 kB' 'Shmem: 6394428 kB' 'KReclaimable: 279792 kB' 'Slab: 1043748 kB' 'SReclaimable: 279792 kB' 'SUnreclaim: 763956 kB' 'KernelStack: 22032 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8061992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
00:04:01.975 06:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.975 06:44:23 -- setup/common.sh@32 -- # continue
[the compare/continue cycle repeats for every field until the key matches]
00:04:01.976 06:44:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.976 06:44:23 -- setup/common.sh@33 -- # echo 1024
00:04:01.976 06:44:23 -- setup/common.sh@33 -- # return 0
00:04:01.976 06:44:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.976 06:44:23 -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.976 06:44:23 -- setup/hugepages.sh@27 -- # local node
00:04:01.976 06:44:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.976 06:44:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.976 06:44:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.976 06:44:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:01.976 06:44:23 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.976 06:44:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.976 06:44:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.976 06:44:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.976 06:44:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.976 06:44:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.976 06:44:23 -- setup/common.sh@18 -- # local node=0
00:04:01.976 06:44:23 -- setup/common.sh@19 -- # local var val
00:04:01.976 06:44:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.976 06:44:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.976 06:44:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.976 06:44:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.976 06:44:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.976 06:44:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.976 06:44:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 19986320 kB' 'MemUsed: 12648116 kB' 'SwapCached: 0 kB' 'Active: 6022784 kB' 'Inactive: 3487008 kB' 'Active(anon): 5899368 kB' 'Inactive(anon): 0 kB' 'Active(file): 123416 kB' 'Inactive(file): 3487008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9186500 kB' 'Mapped: 47540 kB' 'AnonPages: 326392 kB' 'Shmem: 5576076 kB' 'KernelStack: 12136 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102372 kB' 'Slab: 473636 kB' 'SReclaimable: 102372 kB' 'SUnreclaim: 371264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
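Everything above is bash xtrace of setup/common.sh's get_meminfo helper walking a meminfo file key by key; the backslash-heavy right-hand sides (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and friends) are just xtrace's rendering of a quoted, literal comparison string. A minimal sketch of the behavior the trace shows, not a copy of the script (the sed strip below stands in for the mem=("${mem[@]#Node +([0-9]) }") expansion logged above):

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Use the per-node view when a node id was given and the file exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <id> "; strip that so both
        # layouts parse identically, then walk the file key by key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
    }

Called as in the trace, get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo and prints 0 on this box, matching the echo 0 / return 0 that ends each scan.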
00:04:01.976 06:44:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.976 06:44:23 -- setup/common.sh@32 -- # continue
[the compare/continue cycle repeats for every node0 field until the key matches]
00:04:01.977 06:44:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.977 06:44:23 -- setup/common.sh@33 -- # echo 0
00:04:01.977 06:44:23 -- setup/common.sh@33 -- # return 0
00:04:01.977 06:44:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.977 06:44:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.977 06:44:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.977 06:44:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.977 06:44:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:01.977 node0=1024 expecting 1024
00:04:01.977 06:44:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
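The pass condition just logged reduces to plain accounting: HugePages_Total must equal the requested pool plus surplus plus reserved pages, and each node's observed count must match its expected share, hence "node0=1024 expecting 1024". A hedged restatement of that arithmetic (total, surp, resv, node0 are illustrative names, not quotes from hugepages.sh):

    # System-wide: the static pool plus surplus plus reserved must add up.
    total=$(get_meminfo HugePages_Total)   # 1024 in the snapshots above
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    (( total == 1024 + surp + resv )) || exit 1   # 1024 == 1024 + 0 + 0

    # Per node, the same idea against that node's expected share.
    node0=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0 expecting 1024"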
00:04:01.977 real    0m5.866s
00:04:01.977 user    0m1.458s
00:04:01.977 sys     0m2.522s
00:04:01.977 06:44:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:01.977 06:44:23 -- common/autotest_common.sh@10 -- # set +x
00:04:01.977 ************************************
00:04:01.977 END TEST default_setup
00:04:01.977 ************************************
00:04:01.977 06:44:23 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:01.977 06:44:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:01.977 06:44:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:01.977 06:44:23 -- common/autotest_common.sh@10 -- # set +x
00:04:01.977 ************************************
00:04:01.977 START TEST per_node_1G_alloc
00:04:01.977 ************************************
00:04:01.977 06:44:23 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:01.977 06:44:23 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:01.977 06:44:23 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:01.977 06:44:23 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:01.977 06:44:23 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:01.977 06:44:23 -- setup/hugepages.sh@51 -- # shift
00:04:01.977 06:44:23 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:01.977 06:44:23 -- setup/hugepages.sh@52 -- # local node_ids
00:04:01.977 06:44:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.977 06:44:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:01.977 06:44:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:01.977 06:44:23 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:01.977 06:44:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.977 06:44:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:01.977 06:44:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.977 06:44:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.977 06:44:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.977 06:44:23 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:01.977 06:44:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.977 06:44:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:01.977 06:44:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.977 06:44:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:01.977 06:44:23 -- setup/hugepages.sh@73 -- # return 0
00:04:01.977 06:44:23 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:01.977 06:44:23 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:01.977 06:44:23 -- setup/hugepages.sh@146 -- # setup output
00:04:01.977 06:44:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.977 06:44:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
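Before setup.sh runs, get_test_nr_hugepages has already turned the request into per-node counts: 1048576 kB (1 GiB) at the 2048 kB default page size is 512 pages, and with HUGENODE=0,1 each of the two nodes gets 512. A sketch of that arithmetic and of the standard sysfs knob such a setup ultimately writes; the exact commands inside setup.sh are not shown in this log, so treat the write below as illustrative:

    size_kb=1048576                  # requested: 1 GiB worth of hugepages
    hugepgsz_kb=2048                 # Hugepagesize from /proc/meminfo
    nr=$(( size_kb / hugepgsz_kb ))  # 512 pages per node
    for node in 0 1; do              # HUGENODE=0,1
        echo "$nr" \
            > "/sys/devices/system/node/node$node/hugepages/hugepages-${hugepgsz_kb}kB/nr_hugepages"
    done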
00:04:05.306 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:05.306 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:05.306 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:05.306 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:05.307 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.307 06:44:26 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:05.307 06:44:26 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:05.307 06:44:26 -- setup/hugepages.sh@89 -- # local node
00:04:05.307 06:44:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.307 06:44:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.307 06:44:26 -- setup/hugepages.sh@92 -- # local surp
00:04:05.307 06:44:26 -- setup/hugepages.sh@93 -- # local resv
00:04:05.307 06:44:26 -- setup/hugepages.sh@94 -- # local anon
00:04:05.307 06:44:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.307 06:44:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:05.307 06:44:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:05.307 06:44:26 -- setup/common.sh@18 -- # local node=
00:04:05.307 06:44:26 -- setup/common.sh@19 -- # local var val
00:04:05.307 06:44:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.307 06:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.307 06:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.307 06:44:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.307 06:44:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.307 06:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.307 06:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43659540 kB' 'MemAvailable: 47374052 kB' 'Buffers: 4100 kB' 'Cached: 10462144 kB' 'SwapCached: 0 kB' 'Active: 7272468 kB' 'Inactive: 3683044 kB' 'Active(anon): 6883804 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492556 kB' 'Mapped: 194992 kB' 'Shmem: 6394536 kB' 'KReclaimable: 279792 kB' 'Slab: 1043272 kB' 'SReclaimable: 279792 kB' 'SUnreclaim: 763480 kB' 'KernelStack: 21952 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8050948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
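The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is reading /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active mode; since this host is in [madvise] rather than [never], the script goes on to sample AnonHugePages from the snapshot. A sketch of that gate under the same assumption (thp and anon are illustrative names):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # kB of THP-backed anonymous memory (0 here)
    else
        anon=0                             # THP fully disabled: nothing to account for
    fi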
00:04:05.307 06:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.307 06:44:26 -- setup/common.sh@32 -- # continue
[the compare/continue cycle repeats for every field until the key matches]
00:04:05.572 06:44:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.572 06:44:26 -- setup/common.sh@33 -- # echo 0
00:04:05.572 06:44:26 -- setup/common.sh@33 -- # return 0
00:04:05.572 06:44:26 -- setup/hugepages.sh@97 -- # anon=0
00:04:05.572 06:44:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.572 06:44:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.572 06:44:26 -- setup/common.sh@18 -- # local node=
00:04:05.572 06:44:26 -- setup/common.sh@19 -- # local var val
00:04:05.572 06:44:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.572 06:44:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.572 06:44:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.572 06:44:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.572 06:44:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.572 06:44:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.572 06:44:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43661320 kB' 'MemAvailable: 47375832 kB' 'Buffers: 4100 kB' 'Cached: 10462144 kB' 'SwapCached: 0 kB' 'Active: 7272016 kB' 'Inactive: 3683044 kB' 'Active(anon): 6883352 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492108 kB' 'Mapped: 194908 kB' 'Shmem: 6394536 kB' 'KReclaimable: 279792 kB' 'Slab: 1043212 kB' 'SReclaimable: 279792 kB' 'SUnreclaim: 763420 kB' 'KernelStack: 21936 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8050960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
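Two of the fields this second pass cares about are easy to misread: HugePages_Surp counts pages allocated beyond the static pool via the overcommit knob, and HugePages_Rsvd counts pages a mapping has been promised but has not yet faulted in. Both are 0 in the snapshot, so the pool is exactly the 1024 static 2048 kB pages (Hugetlb: 2097152 kB). The same numbers can be eyeballed directly:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo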
00:04:05.572 06:44:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.572 06:44:26 -- setup/common.sh@32 -- # continue
[the compare/continue cycle repeats for the remaining fields]
00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p
]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # continue 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.573 06:44:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.573 06:44:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.573 06:44:26 -- setup/common.sh@33 -- # echo 0 00:04:05.573 06:44:26 -- setup/common.sh@33 -- # return 0 00:04:05.573 06:44:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.573 06:44:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.573 06:44:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.573 06:44:26 -- setup/common.sh@18 -- # local node= 00:04:05.573 06:44:26 -- setup/common.sh@19 -- # local var val 00:04:05.573 06:44:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.573 06:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.573 06:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.573 06:44:27 -- 
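Every get_meminfo call in this test produces the same wall of trace: the helper picks /proc/meminfo (or the per-node sysfs file when a node argument is given), strips the "Node N " prefix, then walks the key/value pairs with IFS=': ' and continues past every field that is not the requested key, echoing the value on a match. A minimal sketch of that scan, reconstructed from the xtrace rather than copied from setup/common.sh (the function name, the -n guard, and the sed-based prefix strip are assumptions of this sketch):

    get_meminfo_sketch() {    # hypothetical name; the real helper is setup/common.sh's get_meminfo
        local get=$1 node=${2-} var val _
        local mem_f=/proc/meminfo
        # per-node statistics live under sysfs when a node is requested
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node lines carry a "Node N " prefix; drop it so the keys line up
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long "continue" runs seen above
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp, it prints the same 0 the trace just returned.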
00:04:05.573 06:44:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.573 06:44:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.573 06:44:26 -- setup/common.sh@18 -- # local node=
00:04:05.573 06:44:26 -- setup/common.sh@19 -- # local var val
00:04:05.573 06:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.573 06:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.573 06:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.573 06:44:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.573 06:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.573 06:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.573 06:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43661960 kB' 'MemAvailable: 47376472 kB' 'Buffers: 4100 kB' 'Cached: 10462156 kB' 'SwapCached: 0 kB' 'Active: 7272020 kB' 'Inactive: 3683044 kB' 'Active(anon): 6883356 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492100 kB' 'Mapped: 194908 kB' 'Shmem: 6394548 kB' 'KReclaimable: 279792 kB' 'Slab: 1043236 kB' 'SReclaimable: 279792 kB' 'SUnreclaim: 763444 kB' 'KernelStack: 21920 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8050972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
00:04:05.574 [xtrace condensed: "continue" on every snapshot field until HugePages_Rsvd comes up]
00:04:05.574 06:44:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.574 06:44:27 -- setup/common.sh@33 -- # echo 0
00:04:05.574 06:44:27 -- setup/common.sh@33 -- # return 0
00:04:05.574 06:44:27 -- setup/hugepages.sh@100 -- # resv=0
00:04:05.574 06:44:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.574 nr_hugepages=1024
00:04:05.574 06:44:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.574 resv_hugepages=0
00:04:05.574 06:44:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.574 surplus_hugepages=0
00:04:05.574 06:44:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.574 anon_hugepages=0
00:04:05.574 06:44:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.574 06:44:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
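The two arithmetic guards above are the actual assertion of this step: the kernel-reported hugepage pool must equal the requested page count plus the surplus and reserved pages just read back. With the traced values that is simply:

    # values echoed above: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0
    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, so the test proceeds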
00:04:05.574 06:44:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.574 06:44:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.574 06:44:27 -- setup/common.sh@18 -- # local node=
00:04:05.574 06:44:27 -- setup/common.sh@19 -- # local var val
00:04:05.575 06:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.575 06:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.575 06:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.575 06:44:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.575 06:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.575 06:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.575 06:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43661960 kB' 'MemAvailable: 47376472 kB' 'Buffers: 4100 kB' 'Cached: 10462184 kB' 'SwapCached: 0 kB' 'Active: 7271684 kB' 'Inactive: 3683044 kB' 'Active(anon): 6883020 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491692 kB' 'Mapped: 194908 kB' 'Shmem: 6394576 kB' 'KReclaimable: 279792 kB' 'Slab: 1043236 kB' 'SReclaimable: 279792 kB' 'SUnreclaim: 763444 kB' 'KernelStack: 21904 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8050988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
00:04:05.575 [xtrace condensed: "continue" on every snapshot field until HugePages_Total comes up]
00:04:05.576 06:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.576 06:44:27 -- setup/common.sh@33 -- # echo 1024
00:04:05.576 06:44:27 -- setup/common.sh@33 -- # return 0
00:04:05.576 06:44:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.576 06:44:27 -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.576 06:44:27 -- setup/hugepages.sh@27 -- # local node
00:04:05.576 06:44:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.576 06:44:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:05.576 06:44:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.576 06:44:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:05.576 06:44:27 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.576 06:44:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.576 06:44:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.576 06:44:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.576 06:44:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.576 06:44:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.576 06:44:27 -- setup/common.sh@18 -- # local node=0
00:04:05.576 06:44:27 -- setup/common.sh@19 -- # local var val
00:04:05.576 06:44:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.576 06:44:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.576 06:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.576 06:44:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.576 06:44:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.576 06:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.576 06:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 21026452 kB' 'MemUsed: 11607984 kB' 'SwapCached: 0 kB' 'Active: 6022040 kB' 'Inactive: 3487008 kB' 'Active(anon): 5898624 kB' 'Inactive(anon): 0 kB' 'Active(file): 123416 kB' 'Inactive(file): 3487008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9186516 kB' 'Mapped: 46400 kB' 'AnonPages: 325692 kB' 'Shmem: 5576092 kB' 'KernelStack: 12024 kB' 'PageTables: 4688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102372 kB' 'Slab: 473440 kB' 'SReclaimable: 102372 kB' 'SUnreclaim: 371068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
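Note how mem_f switches from /proc/meminfo to the sysfs per-node file once node=0 is in play, and that the node0 snapshot is internally consistent; a quick check against the printf line above:

    # MemUsed should equal MemTotal - MemFree (values from the node0 snapshot)
    echo $(( 32634436 - 21026452 ))   # -> 11607984, matching 'MemUsed: 11607984 kB'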
00:04:05.576 [xtrace condensed: "continue" on every node0 field until HugePages_Surp comes up]
00:04:05.577 06:44:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.577 06:44:27 -- setup/common.sh@33 -- # echo 0
00:04:05.577 06:44:27 -- setup/common.sh@33 -- # return 0
00:04:05.577 06:44:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
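The surrounding loop builds the expected per-node counts that get echoed at the end of the test. A hypothetical reconstruction of that arithmetic (array names mirror the xtrace; seeding nodes_test from the nodes_sys values of 512 per node is inferred, not shown verbatim in the trace):

    nodes_test=([0]=512 [1]=512)       # seeded from nodes_sys, 512 pages per node
    resv=0
    for node in 0 1; do
        (( nodes_test[node] += resv ))  # hugepages.sh@116
        surp=0                          # get_meminfo HugePages_Surp $node returned 0 on both nodes
        (( nodes_test[node] += surp ))  # hugepages.sh@117
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # -> node0=512 node1=512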
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.577 06:44:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:05.577 06:44:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:05.577 06:44:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.577 06:44:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.577 06:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.577 06:44:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649344 kB' 'MemFree: 22628220 kB' 'MemUsed: 5021124 kB' 'SwapCached: 0 kB' 'Active: 1255656 kB' 'Inactive: 196036 kB' 'Active(anon): 990408 kB' 'Inactive(anon): 0 kB' 'Active(file): 265248 kB' 'Inactive(file): 196036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1279784 kB' 'Mapped: 149408 kB' 'AnonPages: 172052 kB' 'Shmem: 818500 kB' 'KernelStack: 9896 kB' 'PageTables: 3040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177420 kB' 'Slab: 569796 kB' 'SReclaimable: 177420 kB' 'SUnreclaim: 392376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.577 06:44:27 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every field from MemTotal through FilePmdMapped is tested against HugePages_Surp and skipped with 'continue']
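For reference, the loop condensed above is a plain key/value scan over a meminfo file. A minimal standalone sketch of the same pattern, assuming bash 4+ with extglob; the function name and layout are illustrative, not the literal setup/common.sh source:

#!/usr/bin/env bash
# Minimal sketch of the scan pattern traced above (illustration only).
shopt -s extglob                           # needed for the +([0-9]) pattern
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node meminfo files prefix every line with "Node <n>".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix
    while IFS=': ' read -r var val _; do   # split "Key: value kB"
        if [[ $var == "$get" ]]; then      # the per-key test the xtrace shows
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
get_meminfo HugePages_Surp 1   # prints 0 for the node1 snapshot above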
00:04:05.578 06:44:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.578 06:44:27 -- setup/common.sh@32 -- # continue 00:04:05.578 06:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.578 06:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.578 06:44:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.578 06:44:27 -- setup/common.sh@32 -- # continue 00:04:05.578 06:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.578 06:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.578 06:44:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.578 06:44:27 -- setup/common.sh@32 -- # continue 00:04:05.578 06:44:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.578 06:44:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.578 06:44:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.578 06:44:27 -- setup/common.sh@33 -- # echo 0 00:04:05.578 06:44:27 -- setup/common.sh@33 -- # return 0 00:04:05.578 06:44:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.578 06:44:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.578 06:44:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.578 06:44:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.578 node0=512 expecting 512 00:04:05.578 06:44:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.578 06:44:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.578 06:44:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.578 06:44:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:05.578 node1=512 expecting 512 00:04:05.578 06:44:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:05.578 00:04:05.578 real 0m3.758s 00:04:05.578 user 0m1.433s 00:04:05.578 sys 0m2.400s 00:04:05.578 06:44:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.578 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:05.578 ************************************ 00:04:05.578 END TEST per_node_1G_alloc 00:04:05.578 ************************************ 00:04:05.578 06:44:27 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:05.578 06:44:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.578 06:44:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.578 06:44:27 -- common/autotest_common.sh@10 -- # set +x 00:04:05.578 ************************************ 00:04:05.578 START TEST even_2G_alloc 00:04:05.578 ************************************ 00:04:05.578 06:44:27 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:05.578 06:44:27 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:05.578 06:44:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.578 06:44:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.578 06:44:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.578 06:44:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.578 06:44:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.578 06:44:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.578 06:44:27 -- 
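The get_test_nr_hugepages call above turns the 2097152 kB (2 GiB) request into 1024 hugepages of the 2048 kB size reported in meminfo, and the per-node helper traced next splits them evenly across the rig's two NUMA nodes. A hedged sketch of that arithmetic, with illustrative variable names rather than the real hugepages.sh internals:

size_kb=2097152        # requested pool: 2 GiB expressed in kB
hugepagesize_kb=2048   # Hugepagesize from the meminfo snapshot
no_nodes=2             # NUMA nodes on this rig
nr_hugepages=$(( size_kb / hugepagesize_kb ))        # 1024
declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))  # 512 per node
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]} expecting 512 each"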
setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.578 06:44:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.578 06:44:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.578 06:44:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.578 06:44:27 -- setup/hugepages.sh@83 -- # : 512 00:04:05.578 06:44:27 -- setup/hugepages.sh@84 -- # : 1 00:04:05.578 06:44:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.578 06:44:27 -- setup/hugepages.sh@83 -- # : 0 00:04:05.578 06:44:27 -- setup/hugepages.sh@84 -- # : 0 00:04:05.578 06:44:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.578 06:44:27 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:05.578 06:44:27 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:05.578 06:44:27 -- setup/hugepages.sh@153 -- # setup output 00:04:05.578 06:44:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.578 06:44:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:09.780 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.780 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.780 06:44:30 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:09.780 06:44:30 -- setup/hugepages.sh@89 -- # local node 00:04:09.780 06:44:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.780 06:44:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.780 06:44:30 -- setup/hugepages.sh@92 -- # local surp 00:04:09.780 06:44:30 -- setup/hugepages.sh@93 -- # local resv 00:04:09.780 06:44:30 -- setup/hugepages.sh@94 -- # local anon 00:04:09.780 06:44:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.780 06:44:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.780 06:44:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.780 06:44:30 -- setup/common.sh@18 -- # local node= 00:04:09.780 06:44:30 -- setup/common.sh@19 -- # local var val 00:04:09.780 06:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.780 06:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.781 06:44:30 -- 
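setup.sh reports each IOAT and NVMe function below as "Already using the vfio-pci driver". One way to reproduce that check by hand, shown purely as an illustration and not a line from setup.sh, is to resolve each device's driver symlink in sysfs:

for bdf in 0000:00:04.0 0000:d8:00.0; do   # two of the BDFs listed above
    drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    printf '%s -> %s\n' "$bdf" "$drv"      # expected here: ... -> vfio-pci
done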
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.781 06:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.781 06:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.781 06:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.781 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.781 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.781 06:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43688036 kB' 'MemAvailable: 47402536 kB' 'Buffers: 4100 kB' 'Cached: 10462272 kB' 'SwapCached: 0 kB' 'Active: 7273424 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884760 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493448 kB' 'Mapped: 195024 kB' 'Shmem: 6394664 kB' 'KReclaimable: 279768 kB' 'Slab: 1043392 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 763624 kB' 'KernelStack: 21920 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8051592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: every field from MemTotal through Bounce is tested against AnonHugePages and skipped with 'continue']
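The verifier issues one get_meminfo call per counter: AnonHugePages here, then HugePages_Surp, HugePages_Rsvd and HugePages_Total below, re-scanning the whole file each time. An equivalent one-shot spot-check of the same counters, shown only for illustration, with the expected values taken from the snapshot above:

grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):' /proc/meminfo
# AnonHugePages:         0 kB
# HugePages_Total:    1024
# HugePages_Free:     1024
# HugePages_Rsvd:        0
# HugePages_Surp:        0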
00:04:09.781 06:44:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.781 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.781 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.781 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.781 06:44:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.781 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.781 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.781 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.782 06:44:30 -- setup/common.sh@33 -- # echo 0 00:04:09.782 06:44:30 -- setup/common.sh@33 -- # return 0 00:04:09.782 06:44:30 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.782 06:44:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.782 06:44:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.782 06:44:30 -- setup/common.sh@18 -- # local node= 00:04:09.782 06:44:30 -- setup/common.sh@19 -- # local var val 00:04:09.782 06:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.782 06:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.782 06:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.782 06:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.782 06:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.782 06:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.782 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.782 06:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43688068 kB' 'MemAvailable: 47402568 kB' 'Buffers: 4100 kB' 'Cached: 10462276 kB' 'SwapCached: 0 kB' 
'Active: 7273104 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884440 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493048 kB' 'Mapped: 194932 kB' 'Shmem: 6394668 kB' 'KReclaimable: 279768 kB' 'Slab: 1043336 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 763568 kB' 'KernelStack: 21920 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8051604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: every field from MemTotal through CmaFree is tested against HugePages_Surp and skipped with 'continue']
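The anonymous count above came back 0, and the surplus and reserved scans around this point return 0 as well, so the bookkeeping verify_nr_hugepages applies at hugepages.sh@107 reduces to the identity sketched here, using the values from this run; this is not the hugepages.sh source:

HugePages_Total=1024   # from the meminfo snapshots above
nr_hugepages=1024 surp=0 resv=0
(( HugePages_Total == nr_hugepages + surp + resv )) \
    && echo "hugepage pool verified: $nr_hugepages pages"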
00:04:09.783 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.783 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.783 06:44:30 -- setup/common.sh@33 -- # echo 0 00:04:09.783 06:44:30 -- setup/common.sh@33 -- # return 0 00:04:09.783 06:44:30 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.783 06:44:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.783 06:44:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.783 06:44:30 -- setup/common.sh@18 -- # local node= 00:04:09.783 06:44:30 -- setup/common.sh@19 -- # local var val 00:04:09.783 06:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.783 06:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.783 06:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.783 06:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.783 06:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.783 06:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.783 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.783 06:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43687656 kB' 'MemAvailable: 47402156 kB' 'Buffers: 4100 kB' 'Cached: 10462284 kB' 'SwapCached: 0 kB' 'Active: 7273084 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884420 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493052 kB' 'Mapped: 194932 kB' 'Shmem: 6394676 kB' 'KReclaimable: 279768 kB' 'Slab: 1043336 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 763568 kB' 'KernelStack: 21920 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8051620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with 'continue']
00:04:09.785 06:44:30 -- setup/common.sh@31 -- # read
-r var val _ 00:04:09.785 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.785 06:44:30 -- setup/common.sh@33 -- # echo 0 00:04:09.785 06:44:30 -- setup/common.sh@33 -- # return 0 00:04:09.785 06:44:30 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.785 06:44:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.785 nr_hugepages=1024 00:04:09.785 06:44:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.785 resv_hugepages=0 00:04:09.785 06:44:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.785 surplus_hugepages=0 00:04:09.785 06:44:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.785 anon_hugepages=0 00:04:09.785 06:44:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.785 06:44:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.785 06:44:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.785 06:44:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.785 06:44:30 -- setup/common.sh@18 -- # local node= 00:04:09.785 06:44:30 -- setup/common.sh@19 -- # local var val 00:04:09.785 06:44:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.785 06:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.785 06:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.785 06:44:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.785 06:44:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.785 06:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.785 06:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43687656 kB' 'MemAvailable: 47402156 kB' 'Buffers: 4100 kB' 'Cached: 10462284 kB' 'SwapCached: 0 kB' 'Active: 7273084 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884420 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493052 kB' 'Mapped: 194932 kB' 'Shmem: 6394676 kB' 'KReclaimable: 279768 kB' 'Slab: 1043336 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 763568 kB' 'KernelStack: 21920 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8051632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:09.785 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.785 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.785 06:44:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.785 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.785 06:44:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.785 06:44:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.785 06:44:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.785 06:44:30 -- setup/common.sh@32 -- # continue 00:04:09.785 06:44:30 -- setup/common.sh@31 -- # 
00:04:09.785 06:44:30 -- setup/common.sh@31 -- # IFS=': '
00:04:09.785 06:44:30 -- setup/common.sh@31 -- # read -r var val _
00:04:09.785 [... xtrace: compare/continue repeats for each non-matching /proc/meminfo key, MemTotal through HugePages_Free, while scanning for HugePages_Total ...]
00:04:09.786 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.786 06:44:30 -- setup/common.sh@33 -- # echo 1024
00:04:09.786 06:44:30 -- setup/common.sh@33 -- # return 0
00:04:09.786 06:44:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.786 06:44:30 -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.786 06:44:30 -- setup/hugepages.sh@27 -- # local node
00:04:09.786 06:44:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.786 06:44:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.786 06:44:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.786 06:44:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.786 06:44:30 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:09.786 06:44:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
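The arithmetic guards just traced (hugepages.sh@107/@110) assert the accounting identity the test relies on: the configured page count must match HugePages_Total once surplus and reserved pages are folded in. A self-contained sketch of roughly the same check, using awk in place of the traced read loop:

```bash
#!/usr/bin/env bash
# Sketch of the identity asserted above; values match the snapshot shown.
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1024 here
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)    # 0
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)    # 0
nr=$(cat /proc/sys/vm/nr_hugepages)                               # 1024
(( total == nr + surp + resv )) || echo "hugepage accounting mismatch" >&2
```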
00:04:09.786 06:44:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.786 06:44:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.786 06:44:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.786 06:44:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.786 06:44:30 -- setup/common.sh@18 -- # local node=0
00:04:09.786 06:44:30 -- setup/common.sh@19 -- # local var val
00:04:09.786 06:44:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.786 06:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.786 06:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.786 06:44:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.786 06:44:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.786 06:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.786 06:44:30 -- setup/common.sh@31 -- # IFS=': '
00:04:09.786 06:44:30 -- setup/common.sh@31 -- # read -r var val _
00:04:09.786 06:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 21033188 kB' 'MemUsed: 11601248 kB' 'SwapCached: 0 kB' 'Active: 6022940 kB' 'Inactive: 3487008 kB' 'Active(anon): 5899524 kB' 'Inactive(anon): 0 kB' 'Active(file): 123416 kB' 'Inactive(file): 3487008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9186588 kB' 'Mapped: 46400 kB' 'AnonPages: 326536 kB' 'Shmem: 5576164 kB' 'KernelStack: 12024 kB' 'PageTables: 4680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102356 kB' 'Slab: 473424 kB' 'SReclaimable: 102356 kB' 'SUnreclaim: 371068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:09.786 [... xtrace: compare/continue repeats for each non-matching node0 key while scanning for HugePages_Surp ...]
00:04:09.787 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.787 06:44:30 -- setup/common.sh@33 -- # echo 0
00:04:09.787 06:44:30 -- setup/common.sh@33 -- # return 0
00:04:09.787 06:44:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
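The same per-node pass runs next for node1 (condensed below). As a compact stand-in for the whole loop, assuming the standard sysfs layout where node meminfo files prefix every line with "Node <n>":

```bash
#!/usr/bin/env bash
# Sketch of the per-node verification loop: fold each node's surplus pages
# into the expected count (512 per node from the even 2G split above).
shopt -s extglob nullglob   # the traced glob node+([0-9]) needs extglob
declare -a nodes_test
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    # In node files the key is field 3: "Node 0 HugePages_Surp: 0"
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node/meminfo")
    surp=${surp:-0}
    nodes_test[id]=$(( 512 + surp ))
    echo "node$id=${nodes_test[id]} expecting 512"
done
```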
00:04:09.787 06:44:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.787 06:44:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.787 06:44:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:09.787 06:44:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.787 06:44:30 -- setup/common.sh@18 -- # local node=1
00:04:09.787 06:44:30 -- setup/common.sh@19 -- # local var val
00:04:09.787 06:44:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.787 06:44:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.787 06:44:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:09.787 06:44:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:09.787 06:44:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.787 06:44:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.787 06:44:30 -- setup/common.sh@31 -- # IFS=': '
00:04:09.787 06:44:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649344 kB' 'MemFree: 22654472 kB' 'MemUsed: 4994872 kB' 'SwapCached: 0 kB' 'Active: 1250216 kB' 'Inactive: 196036 kB' 'Active(anon): 984968 kB' 'Inactive(anon): 0 kB' 'Active(file): 265248 kB' 'Inactive(file): 196036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1279828 kB' 'Mapped: 148532 kB' 'AnonPages: 166516 kB' 'Shmem: 818544 kB' 'KernelStack: 9880 kB' 'PageTables: 2960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177412 kB' 'Slab: 569912 kB' 'SReclaimable: 177412 kB' 'SUnreclaim: 392500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:09.787 06:44:30 -- setup/common.sh@31 -- # read -r var val _
00:04:09.788 [... xtrace: compare/continue repeats for each non-matching node1 key while scanning for HugePages_Surp ...]
00:04:09.788 06:44:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.788 06:44:30 -- setup/common.sh@33 -- # echo 0
00:04:09.788 06:44:30 -- setup/common.sh@33 -- # return 0
00:04:09.788 06:44:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.788 06:44:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.788 06:44:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:09.788 06:44:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
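The sorted_t/sorted_s assignments above use a small bash idiom worth calling out: writing 1 into an indexed array at index equal to the observed count turns the array's key set into a set of distinct values. A sketch:

```bash
#!/usr/bin/env bash
# Sketch of the dedupe idiom behind sorted_t[nodes_test[node]]=1: array
# indices act as a set, so equal per-node counts collapse to one key.
declare -a nodes_test=([0]=512 [1]=512) sorted_t=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1       # key = count, value irrelevant
done
echo "distinct per-node counts: ${!sorted_t[*]}"   # -> 512
```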
00:04:09.788 06:44:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:09.788 node0=512 expecting 512
00:04:09.788 06:44:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:09.788 06:44:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:09.788 06:44:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:09.788 06:44:30 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:09.788 node1=512 expecting 512
00:04:09.788 06:44:30 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:09.788
00:04:09.788 real	0m3.737s
00:04:09.788 user	0m1.421s
00:04:09.788 sys	0m2.386s
00:04:09.788 06:44:30 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.788 06:44:30 -- common/autotest_common.sh@10 -- # set +x
00:04:09.788 ************************************
00:04:09.788 END TEST even_2G_alloc
00:04:09.788 ************************************
00:04:09.788 06:44:30 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:09.788 06:44:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.788 06:44:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.788 06:44:30 -- common/autotest_common.sh@10 -- # set +x
00:04:09.788 ************************************
00:04:09.788 START TEST odd_alloc
00:04:09.788 ************************************
00:04:09.789 06:44:30 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:09.789 06:44:30 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:09.789 06:44:30 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:09.789 06:44:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:09.789 06:44:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:09.789 06:44:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:09.789 06:44:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:09.789 06:44:30 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:09.789 06:44:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.789 06:44:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:09.789 06:44:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:09.789 06:44:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.789 06:44:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.789 06:44:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:09.789 06:44:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:09.789 06:44:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.789 06:44:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:09.789 06:44:30 -- setup/hugepages.sh@83 -- # : 513
00:04:09.789 06:44:30 -- setup/hugepages.sh@84 -- # : 1
00:04:09.789 06:44:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.789 06:44:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:09.789 06:44:30 -- setup/hugepages.sh@83 -- # : 0
00:04:09.789 06:44:30 -- setup/hugepages.sh@84 -- # : 0
00:04:09.789 06:44:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.789 06:44:30 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:09.789 06:44:30 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:09.789 06:44:30 -- setup/hugepages.sh@160 -- # setup output
00:04:09.789 06:44:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.789 06:44:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
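The distribution just traced follows from HUGEMEM=2049: 2098176 kB at 2048 kB per page is 1024.5 pages, and the trace shows nr_hugepages=1025, i.e. rounded up, which cannot split evenly across two nodes. A sketch of the split, mirroring the traced arithmetic (variable names come from the trace; the `:` no-ops reproduce the `: 513` / `: 1` lines above):

```bash
#!/usr/bin/env bash
# Sketch of the odd split: spread 1025 pages over 2 nodes, walking from
# the last node down so node0 absorbs the odd remainder (513).
_nr_hugepages=1025 _no_nodes=2
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))  # pages left: 513, then 0
    : $(( --_no_nodes ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"     # -> node0=513 node1=512
```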
00:04:13.087 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:13.087 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:13.087 06:44:34 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:13.087 06:44:34 -- setup/hugepages.sh@89 -- # local node
00:04:13.087 06:44:34 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:13.087 06:44:34 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:13.087 06:44:34 -- setup/hugepages.sh@92 -- # local surp
00:04:13.087 06:44:34 -- setup/hugepages.sh@93 -- # local resv
00:04:13.087 06:44:34 -- setup/hugepages.sh@94 -- # local anon
00:04:13.087 06:44:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:13.087 06:44:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:13.087 06:44:34 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:13.087 06:44:34 -- setup/common.sh@18 -- # local node=
00:04:13.087 06:44:34 -- setup/common.sh@19 -- # local var val
00:04:13.087 06:44:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.087 06:44:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.087 06:44:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.087 06:44:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.087 06:44:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.087 06:44:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.087 06:44:34 -- setup/common.sh@31 -- # IFS=': '
00:04:13.087 06:44:34 -- setup/common.sh@31 -- # read -r var val _
00:04:13.087 06:44:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43674492 kB' 'MemAvailable: 47388992 kB' 'Buffers: 4100 kB' 'Cached: 10462408 kB' 'SwapCached: 0 kB' 'Active: 7274660 kB' 'Inactive: 3683044 kB' 'Active(anon): 6885996 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494508 kB' 'Mapped: 194940 kB' 'Shmem: 6394800 kB' 'KReclaimable: 279768 kB' 'Slab: 1043796 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764028 kB' 'KernelStack: 22112 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8056636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218124 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
00:04:13.087 [... xtrace: compare/continue repeats for each non-matching /proc/meminfo key while scanning for AnonHugePages ...]
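verify_nr_hugepages above first gates on transparent hugepages: the `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test is the expansion of a check against /sys/kernel/mm/transparent_hugepage/enabled, so AnonHugePages is only counted when THP is not globally disabled. A self-contained sketch of that gate (the missing-file fallback is an assumption for portability):

```bash
#!/usr/bin/env bash
# Sketch of the THP gate traced above: count AnonHugePages toward the
# total only when the global THP mode is not "[never]".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo never)
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=$anon"   # -> 0 on the snapshot above
```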
00:04:13.088 06:44:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.088 06:44:34 -- setup/common.sh@33 -- # echo 0
00:04:13.088 06:44:34 -- setup/common.sh@33 -- # return 0
00:04:13.088 06:44:34 -- setup/hugepages.sh@97 -- # anon=0
00:04:13.088 06:44:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.088 06:44:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.088 06:44:34 -- setup/common.sh@18 -- # local node=
00:04:13.088 06:44:34 -- setup/common.sh@19 -- # local var val
00:04:13.088 06:44:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.088 06:44:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.088 06:44:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.088 06:44:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.088 06:44:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.088 06:44:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.088 06:44:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43676864 kB' 'MemAvailable: 47391364 kB' 'Buffers: 4100 kB' 'Cached: 10462412 kB' 'SwapCached: 0 kB' 'Active: 7274924 kB' 'Inactive: 3683044 kB' 'Active(anon): 6886260 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494792 kB' 'Mapped: 194932 kB' 'Shmem: 6394804 kB' 'KReclaimable: 279768 kB' 'Slab: 1043796 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764028 kB' 'KernelStack: 21936 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8056648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218092 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.088 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.088 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 
06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.089 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.089 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.090 06:44:34 -- setup/common.sh@33 -- # echo 0 00:04:13.090 06:44:34 -- setup/common.sh@33 
-- # return 0 00:04:13.090 06:44:34 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.090 06:44:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.090 06:44:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.090 06:44:34 -- setup/common.sh@18 -- # local node= 00:04:13.090 06:44:34 -- setup/common.sh@19 -- # local var val 00:04:13.090 06:44:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.090 06:44:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.090 06:44:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.090 06:44:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.090 06:44:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.090 06:44:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43678796 kB' 'MemAvailable: 47393296 kB' 'Buffers: 4100 kB' 'Cached: 10462424 kB' 'SwapCached: 0 kB' 'Active: 7274152 kB' 'Inactive: 3683044 kB' 'Active(anon): 6885488 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493944 kB' 'Mapped: 194932 kB' 'Shmem: 6394816 kB' 'KReclaimable: 279768 kB' 'Slab: 1043828 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764060 kB' 'KernelStack: 22000 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8053256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218188 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.090 06:44:34 -- setup/common.sh@32 -- # continue 00:04:13.090 06:44:34 -- 
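The block above is the suite's get_meminfo helper walking /proc/meminfo one "key: value" line at a time until the requested key matches, then echoing just the value. A minimal sketch of that helper, reconstructed from the xtrace (simplified, not the verbatim setup/common.sh source):

    shopt -s extglob                         # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2-} var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; those lines carry a "Node N " prefix.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix, if any
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                      # value only, e.g. "0" or "1025"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it prints 0 here; with a node number as the second argument (used further down) it reads the per-node sysfs file instead.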
00:04:13.090 06:44:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.090 06:44:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.090 06:44:34 -- setup/common.sh@18 -- # local node=
00:04:13.090 06:44:34 -- setup/common.sh@19 -- # local var val
00:04:13.090 06:44:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.090 06:44:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.090 06:44:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.090 06:44:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.090 06:44:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.090 06:44:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.090 06:44:34 -- setup/common.sh@31 -- # IFS=': '
00:04:13.090 06:44:34 -- setup/common.sh@31 -- # read -r var val _
00:04:13.090 06:44:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43678796 kB' 'MemAvailable: 47393296 kB' 'Buffers: 4100 kB' 'Cached: 10462424 kB' 'SwapCached: 0 kB' 'Active: 7274152 kB' 'Inactive: 3683044 kB' 'Active(anon): 6885488 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493944 kB' 'Mapped: 194932 kB' 'Shmem: 6394816 kB' 'KReclaimable: 279768 kB' 'Slab: 1043828 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764060 kB' 'KernelStack: 22000 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8053256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218188 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[ scan: MemTotal through HugePages_Free each compared against HugePages_Rsvd and skipped via continue at setup/common.sh@31-32 ]
00:04:13.091 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.091 06:44:34 -- setup/common.sh@33 -- # echo 0
00:04:13.091 06:44:34 -- setup/common.sh@33 -- # return 0
00:04:13.091 06:44:34 -- setup/hugepages.sh@100 -- # resv=0
00:04:13.091 06:44:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:13.091 nr_hugepages=1025
00:04:13.091 06:44:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.091 resv_hugepages=0
00:04:13.091 06:44:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.091 surplus_hugepages=0
00:04:13.091 06:44:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.091 anon_hugepages=0
00:04:13.091 06:44:34 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:13.091 06:44:34 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
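At hugepages.sh@107 the suite cross-checks the pool it configured: the allocated total must equal the requested page count plus surplus and reserved pages, all gathered through get_meminfo. The same arithmetic as a standalone sketch, assuming the get_meminfo helper sketched above (verify_hugepage_accounting is a hypothetical name, not the script's):

    verify_hugepage_accounting() {
        local nr_hugepages=1025               # what the test asked the kernel for
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)    # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
        total=$(get_meminfo HugePages_Total)  # 1025 in this run
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
        # The pool is consistent when allocated == requested + surplus + reserved.
        (( total == nr_hugepages + surp + resv ))
    }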
00:04:13.091 06:44:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.091 06:44:34 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.091 06:44:34 -- setup/common.sh@18 -- # local node=
00:04:13.091 06:44:34 -- setup/common.sh@19 -- # local var val
00:04:13.091 06:44:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.091 06:44:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.091 06:44:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.091 06:44:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.091 06:44:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.091 06:44:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.091 06:44:34 -- setup/common.sh@31 -- # IFS=': '
00:04:13.091 06:44:34 -- setup/common.sh@31 -- # read -r var val _
00:04:13.091 06:44:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43680044 kB' 'MemAvailable: 47394544 kB' 'Buffers: 4100 kB' 'Cached: 10462448 kB' 'SwapCached: 0 kB' 'Active: 7273600 kB' 'Inactive: 3683044 kB' 'Active(anon): 6884936 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493364 kB' 'Mapped: 194932 kB' 'Shmem: 6394840 kB' 'KReclaimable: 279768 kB' 'Slab: 1043828 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764060 kB' 'KernelStack: 21792 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8052116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[ scan: MemTotal through Unaccepted each compared against HugePages_Total and skipped via continue at setup/common.sh@31-32 ]
00:04:13.093 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.093 06:44:34 -- setup/common.sh@33 -- # echo 1025
00:04:13.093 06:44:34 -- setup/common.sh@33 -- # return 0
00:04:13.093 06:44:34 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:13.093 06:44:34 -- setup/hugepages.sh@112 -- # get_nodes
00:04:13.093 06:44:34 -- setup/hugepages.sh@27 -- # local node
00:04:13.093 06:44:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.093 06:44:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:13.093 06:44:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:13.093 06:44:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:13.093 06:44:34 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:13.093 06:44:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
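get_nodes then enumerates the NUMA nodes under /sys/devices/system/node and records each node's hugepage count: 512 on node0 and 513 on node1, since the odd 1025-page total cannot split evenly across two nodes. Roughly, as a sketch built on the node-aware get_meminfo above (nodes_sys mirrors the array name in the trace):

    shopt -s extglob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}                                # "node0" -> "0"
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
    done
    echo "no_nodes=${#nodes_sys[@]}"                       # 2 on this machine
    printf 'node%s=%s\n' 0 "${nodes_sys[0]}" 1 "${nodes_sys[1]}"   # 512 and 513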
00:04:13.093 06:44:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.093 06:44:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.093 06:44:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:13.093 06:44:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.093 06:44:34 -- setup/common.sh@18 -- # local node=0
00:04:13.093 06:44:34 -- setup/common.sh@19 -- # local var val
00:04:13.093 06:44:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.093 06:44:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.093 06:44:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:13.093 06:44:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:13.093 06:44:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.093 06:44:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.093 06:44:34 -- setup/common.sh@31 -- # IFS=': '
00:04:13.093 06:44:34 -- setup/common.sh@31 -- # read -r var val _
00:04:13.093 06:44:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 21032828 kB' 'MemUsed: 11601608 kB' 'SwapCached: 0 kB' 'Active: 6021492 kB' 'Inactive: 3487008 kB' 'Active(anon): 5898076 kB' 'Inactive(anon): 0 kB' 'Active(file): 123416 kB' 'Inactive(file): 3487008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9186624 kB' 'Mapped: 46400 kB' 'AnonPages: 325048 kB' 'Shmem: 5576200 kB' 'KernelStack: 12024 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102356 kB' 'Slab: 473696 kB' 'SReclaimable: 102356 kB' 'SUnreclaim: 371340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[ scan: node0 keys MemTotal through HugePages_Free each compared against HugePages_Surp and skipped via continue at setup/common.sh@31-32 ]
00:04:13.354 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.354 06:44:34 -- setup/common.sh@33 -- # echo 0
00:04:13.354 06:44:34 -- setup/common.sh@33 -- # return 0
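With node0 reporting a HugePages_Surp of 0, the loop at hugepages.sh@115-117 folds reserved and surplus pages into each node's expected count before the final comparison, and node1 is visited next. The bookkeeping, sketched (the exact accounting in setup/hugepages.sh may differ; nodes_test mirrors the array name in the trace):

    declare -a nodes_test=([0]=512 [1]=513)            # expected split from get_nodes
    resv=0                                             # HugePages_Rsvd, from above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # fold in reserved pages (0 here)
        surp_n=$(get_meminfo HugePages_Surp "$node")   # node-local surplus
        (( nodes_test[node] += surp_n ))               # 0 on both nodes in this run
    done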
00:04:13.354 06:44:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.354 06:44:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.354 06:44:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.354 06:44:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:13.354 06:44:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.354 06:44:34 -- setup/common.sh@18 -- # local node=1
00:04:13.354 06:44:34 -- setup/common.sh@19 -- # local var val
00:04:13.354 06:44:34 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.354 06:44:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.354 06:44:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:13.354 06:44:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:13.354 06:44:34 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.354 06:44:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.354 06:44:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649344 kB' 'MemFree: 22647576 kB' 'MemUsed: 5001768 kB' 'SwapCached: 0 kB' 'Active: 1252080 kB' 'Inactive: 196036 kB' 'Active(anon): 986832 kB' 'Inactive(anon): 0 kB' 'Active(file): 265248 kB' 'Inactive(file): 196036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1279940 kB' 'Mapped: 148524 kB' 'AnonPages: 168292 kB' 'Shmem: 818656 kB' 'KernelStack: 9880 kB' 'PageTables: 3080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177412 kB' 'Slab: 570168 kB' 'SReclaimable: 177412 kB' 'SUnreclaim: 392756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:13.354 [xtrace condensed: the node1 meminfo keys are scanned the same way; every key before HugePages_Surp is skipped with 'continue']
00:04:13.355 06:44:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.355 06:44:34 -- setup/common.sh@33 -- # echo 0
00:04:13.355 06:44:34 -- setup/common.sh@33 -- # return 0
00:04:13.355 06:44:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.355 06:44:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.355 06:44:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.355 06:44:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.355 06:44:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:13.355 node0=512 expecting 513
00:04:13.355 06:44:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.355 06:44:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.355 06:44:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.355 06:44:34 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:13.355 node1=513 expecting 512
00:04:13.355 06:44:34 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:13.355
00:04:13.355 real 0m3.798s
00:04:13.355 user 0m1.452s
00:04:13.355 sys 0m2.418s
00:04:13.355 06:44:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:13.355 06:44:34 -- common/autotest_common.sh@10 -- # set +x
00:04:13.355 ************************************
00:04:13.355 END TEST odd_alloc
00:04:13.355 ************************************
00:04:13.355 06:44:34 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:13.355 06:44:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:13.355 06:44:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:13.355 06:44:34 -- common/autotest_common.sh@10 -- # set +x
00:04:13.355 ************************************
00:04:13.355 START TEST custom_alloc
00:04:13.355 ************************************
00:04:13.355 06:44:34 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:13.355 06:44:34 -- setup/hugepages.sh@167 -- # local IFS=,
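A note on the verdict just above: [[ 512 513 == \5\1\2\ \5\1\3 ]] passes even though node0 got 512 where 513 was expected (and vice versa), because the counts are used as array keys and bash expands indexed-array keys in ascending order. A self-contained illustration with values mirroring the traced run (the custom_alloc trace resumes right after this sketch):

# Counts mirror the traced odd_alloc run: the test wanted 512/513 but the
# kernel placed 513/512 -- the check still passes.
nodes_test=([0]=512 [1]=513)
nodes_sys=([0]=513 [1]=512)

declare -a sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # the count itself becomes the array key
    sorted_s[nodes_sys[node]]=1
done

# Indexed-array keys expand in ascending order, so both sides read
# "512 513" and per-node placement no longer matters.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'counts match'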
06:44:34 -- setup/hugepages.sh@169 -- # local node 00:04:13.355 06:44:34 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:13.355 06:44:34 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:13.355 06:44:34 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:13.355 06:44:34 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:13.355 06:44:34 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:13.355 06:44:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:13.355 06:44:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.355 06:44:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.355 06:44:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.355 06:44:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:13.355 06:44:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.355 06:44:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.355 06:44:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.355 06:44:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:13.355 06:44:34 -- setup/hugepages.sh@83 -- # : 256 00:04:13.355 06:44:34 -- setup/hugepages.sh@84 -- # : 1 00:04:13.355 06:44:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:13.355 06:44:34 -- setup/hugepages.sh@83 -- # : 0 00:04:13.355 06:44:34 -- setup/hugepages.sh@84 -- # : 0 00:04:13.355 06:44:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:13.355 06:44:34 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:13.355 06:44:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.355 06:44:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.355 06:44:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.355 06:44:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.355 06:44:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.355 06:44:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.355 06:44:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.355 06:44:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.355 06:44:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.355 06:44:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.355 06:44:34 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:13.355 06:44:34 -- setup/hugepages.sh@78 -- # return 0 00:04:13.355 06:44:34 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:13.355 06:44:34 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:13.355 06:44:34 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:13.355 06:44:34 -- setup/hugepages.sh@183 -- # (( 
_nr_hugepages += nodes_hp[node] )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:13.355 06:44:34 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:13.355 06:44:34 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:13.355 06:44:34 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:13.355 06:44:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.355 06:44:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.355 06:44:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.355 06:44:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.356 06:44:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.356 06:44:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.356 06:44:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.356 06:44:34 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:13.356 06:44:34 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.356 06:44:34 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:13.356 06:44:34 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:13.356 06:44:34 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:13.356 06:44:34 -- setup/hugepages.sh@78 -- # return 0 00:04:13.356 06:44:34 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:13.356 06:44:34 -- setup/hugepages.sh@187 -- # setup output 00:04:13.356 06:44:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.356 06:44:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:16.744 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.744 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.007 06:44:38 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:17.007 06:44:38 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:17.007 06:44:38 -- setup/hugepages.sh@89 -- # local node 00:04:17.007 06:44:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.007 06:44:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.007 06:44:38 -- setup/hugepages.sh@92 -- # local surp 00:04:17.007 06:44:38 -- setup/hugepages.sh@93 -- # local resv 00:04:17.007 06:44:38 -- setup/hugepages.sh@94 -- # local anon 00:04:17.007 06:44:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 
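The HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' string handed to setup.sh above is simply the nodes_hp array joined on a comma, since custom_alloc declares local IFS=, before expanding it. A short sketch of that construction (values taken from the traced run, surrounding plumbing omitted):

# custom_alloc declares 'local IFS=,' so "${HUGENODE[*]}" joins on commas.
IFS=,
nodes_hp=([0]=512 [1]=1024)
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
echo "HUGENODE=${HUGENODE[*]}"   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024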
00:04:17.007 06:44:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:17.007 06:44:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:17.007 06:44:38 -- setup/common.sh@18 -- # local node=
00:04:17.007 06:44:38 -- setup/common.sh@19 -- # local var val
00:04:17.007 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.007 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.007 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.007 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.007 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.007 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.007 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:04:17.007 06:44:38 -- setup/common.sh@31 -- # read -r var val _
00:04:17.007 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42646328 kB' 'MemAvailable: 46360828 kB' 'Buffers: 4100 kB' 'Cached: 10462544 kB' 'SwapCached: 0 kB' 'Active: 7274356 kB' 'Inactive: 3683044 kB' 'Active(anon): 6885692 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494012 kB' 'Mapped: 194912 kB' 'Shmem: 6394936 kB' 'KReclaimable: 279768 kB' 'Slab: 1043928 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764160 kB' 'KernelStack: 21840 kB' 'PageTables: 7400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8052316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
00:04:17.008 [xtrace condensed: each meminfo key is compared against AnonHugePages and skipped with 'continue' until the match]
00:04:17.008 06:44:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:17.008 06:44:38 -- setup/common.sh@33 -- # echo 0
00:04:17.008 06:44:38 -- setup/common.sh@33 -- # return 0
00:04:17.008 06:44:38 -- setup/hugepages.sh@97 -- # anon=0
00:04:17.008 06:44:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:17.008 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.008 06:44:38 -- setup/common.sh@18 -- # local node=
00:04:17.009 06:44:38 -- setup/common.sh@19 -- # local var val
00:04:17.009 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.009 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.009 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.009 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.009 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.009 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.009 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:04:17.009 06:44:38 -- setup/common.sh@31 -- # read -r var val _
00:04:17.009 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42647116 kB' 'MemAvailable: 46361616 kB' 'Buffers: 4100 kB' 'Cached: 10462552 kB' 'SwapCached: 0 kB' 'Active: 7274052 kB' 'Inactive: 3683044 kB' 'Active(anon): 6885388 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493676 kB' 'Mapped: 194896 kB' 'Shmem: 6394944 kB' 'KReclaimable: 279768 kB' 'Slab: 1044032 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764264 kB' 'KernelStack: 21872 kB' 'PageTables: 7504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8052332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
00:04:17.009 [xtrace condensed: each meminfo key is compared against HugePages_Surp and skipped with 'continue' until the match]
00:04:17.010 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.010 06:44:38 -- setup/common.sh@33 -- # echo 0
00:04:17.010 06:44:38 -- setup/common.sh@33 -- # return 0
00:04:17.010 06:44:38 -- setup/hugepages.sh@99 -- # surp=0
00:04:17.010 06:44:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:17.010 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:17.010 06:44:38 -- setup/common.sh@18 -- # local node=
00:04:17.010 06:44:38 -- setup/common.sh@19 -- # local var val
00:04:17.010 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.010 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.010 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.010 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.010 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.010 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.010 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:04:17.010 06:44:38 -- setup/common.sh@31 -- # read -r var val _
00:04:17.010 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42648556 kB' 'MemAvailable: 46363056 kB' 'Buffers: 4100 kB' 'Cached: 10462568 kB' 'SwapCached: 0 kB' 'Active: 7274548 kB' 'Inactive: 3683044 kB' 'Active(anon): 6885884 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494200 kB' 'Mapped: 194896 kB' 'Shmem: 6394960 kB' 'KReclaimable: 279768 kB' 'Slab: 1044032 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764264 kB' 'KernelStack: 21920 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8052852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
00:04:17.011 [xtrace condensed: the same key-by-key scan now runs for HugePages_Rsvd; non-matching keys are skipped with 'continue']
val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # 
continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.011 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.011 06:44:38 -- setup/common.sh@33 -- # echo 0 00:04:17.011 06:44:38 -- setup/common.sh@33 -- # return 0 00:04:17.011 06:44:38 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.011 06:44:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:17.011 nr_hugepages=1536 00:04:17.011 06:44:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.011 resv_hugepages=0 00:04:17.011 06:44:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.011 surplus_hugepages=0 00:04:17.011 06:44:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.011 anon_hugepages=0 00:04:17.011 06:44:38 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.011 06:44:38 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:17.011 06:44:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.011 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.011 06:44:38 -- setup/common.sh@18 -- # local node= 00:04:17.011 06:44:38 -- setup/common.sh@19 -- # local var val 00:04:17.011 06:44:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.011 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.011 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.011 06:44:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.011 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.011 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.011 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 42650436 kB' 'MemAvailable: 46364936 kB' 'Buffers: 4100 kB' 'Cached: 10462580 kB' 'SwapCached: 0 kB' 'Active: 7274564 kB' 'Inactive: 3683044 kB' 'Active(anon): 6885900 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494204 kB' 'Mapped: 194896 kB' 'Shmem: 6394972 kB' 'KReclaimable: 279768 kB' 'Slab: 1044032 kB' 'SReclaimable: 279768 kB' 'SUnreclaim: 764264 kB' 'KernelStack: 21920 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8052864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 
-- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.012 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.012 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.013 06:44:38 -- setup/common.sh@33 -- # echo 1536 00:04:17.013 06:44:38 -- setup/common.sh@33 -- # return 0 00:04:17.013 06:44:38 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.013 06:44:38 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.013 06:44:38 -- setup/hugepages.sh@27 -- # local node 00:04:17.013 06:44:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.013 06:44:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.013 06:44:38 -- setup/hugepages.sh@29 -- # for node in 
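The loop traced above is all that get_meminfo does: slurp /proc/meminfo (or a per-node copy under /sys/devices/system/node/), strip any "Node N " prefix, then walk the keys with IFS=': ' until the requested field matches and echo its value. A minimal self-contained sketch of that pattern; the function name and argument handling here are illustrative assumptions, not the script's exact code:

    #!/usr/bin/env bash
    shopt -s extglob                        # for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}            # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Total      # prints 1536 given the snapshot above
    get_meminfo_sketch HugePages_Surp 0     # node-0 lookup, prints 0

Scanning the whole file key by key is linear in the number of meminfo fields per lookup, which is why the trace repeats the same compare-and-continue pair dozens of times for each query.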
00:04:17.013 06:44:38 -- setup/hugepages.sh@112 -- # get_nodes
00:04:17.013 06:44:38 -- setup/hugepages.sh@27 -- # local node
00:04:17.013 06:44:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.013 06:44:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:17.013 06:44:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.013 06:44:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:17.013 06:44:38 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:17.013 06:44:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:17.013 06:44:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.013 06:44:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.013 06:44:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:17.013 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.013 06:44:38 -- setup/common.sh@18 -- # local node=0
00:04:17.013 06:44:38 -- setup/common.sh@19 -- # local var val
00:04:17.013 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.013 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.013 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:17.013 06:44:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:17.013 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.013 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.013 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:04:17.013 06:44:38 -- setup/common.sh@31 -- # read -r var val _
00:04:17.013 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 21037040 kB' 'MemUsed: 11597396 kB' 'SwapCached: 0 kB' 'Active: 6022248 kB' 'Inactive: 3487008 kB' 'Active(anon): 5898832 kB' 'Inactive(anon): 0 kB' 'Active(file): 123416 kB' 'Inactive(file): 3487008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9186672 kB' 'Mapped: 46400 kB' 'AnonPages: 325740 kB' 'Shmem: 5576248 kB' 'KernelStack: 12056 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102356 kB' 'Slab: 474032 kB' 'SReclaimable: 102356 kB' 'SUnreclaim: 371676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:17.013 06:44:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.013 06:44:38 -- setup/common.sh@32 -- # continue
[... per-key xtrace elided as above ...]
00:04:17.014 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.014 06:44:38 -- setup/common.sh@33 -- # echo 0
00:04:17.014 06:44:38 -- setup/common.sh@33 -- # return 0
00:04:17.014 06:44:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.014 06:44:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.014 06:44:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.014 06:44:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:17.014 06:44:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.014 06:44:38 -- setup/common.sh@18 -- # local node=1
00:04:17.014 06:44:38 -- setup/common.sh@19 -- # local var val
00:04:17.014 06:44:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.014 06:44:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.014 06:44:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:17.014 06:44:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:17.014 06:44:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.014 06:44:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.014 06:44:38 -- setup/common.sh@31 -- # IFS=': '
00:04:17.014 06:44:38 -- setup/common.sh@31 -- # read -r var val _
00:04:17.014 06:44:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649344 kB' 'MemFree: 21607084 kB' 'MemUsed: 6042260 kB' 'SwapCached: 0 kB' 'Active: 1257972 kB' 'Inactive: 196036 kB' 'Active(anon): 992724 kB' 'Inactive(anon): 0 kB' 'Active(file): 265248 kB' 'Inactive(file): 196036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1280024 kB' 'Mapped: 149000 kB' 'AnonPages: 174232 kB' 'Shmem: 818740 kB' 'KernelStack: 9864 kB' 'PageTables: 3144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177412 kB' 'Slab: 570000 kB' 'SReclaimable: 177412 kB' 'SUnreclaim: 392588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:17.014 06:44:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.014 06:44:38 -- setup/common.sh@32 -- # continue
[... per-key xtrace elided as above ...]
00:04:17.015 06:44:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.015 06:44:38 -- setup/common.sh@33 -- # echo 0
00:04:17.015 06:44:38 -- setup/common.sh@33 -- # return 0
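The two per-node snapshots just read back HugePages_Total: 512 on node0 and 1024 on node1, 1536 machine-wide, with zero surplus on both. For reference, a split like that is normally applied through the per-node sysfs knobs; a short sketch (standard NUMA-kernel sysfs paths, values mirroring this run, root required):

    # write the per-node counts, then read back the machine-wide total
    echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # 1536

The verification below folds the per-node surplus (0 here) into nodes_test and compares the result against the expected 512/1024 distribution.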
00:04:17.015 06:44:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.015 06:44:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.015 06:44:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.015 06:44:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.015 06:44:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:17.015 node0=512 expecting 512
00:04:17.015 06:44:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.015 06:44:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.015 06:44:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.015 06:44:38 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:17.015 node1=1024 expecting 1024
00:04:17.015 06:44:38 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:17.015
00:04:17.015 real 0m3.811s
00:04:17.015 user 0m1.432s
00:04:17.015 sys 0m2.451s
00:04:17.015 06:44:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:17.015 06:44:38 -- common/autotest_common.sh@10 -- # set +x
00:04:17.015 ************************************
00:04:17.015 END TEST custom_alloc
00:04:17.015 ************************************
00:04:17.275 06:44:38 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:17.275 06:44:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:17.275 06:44:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:17.275 06:44:38 -- common/autotest_common.sh@10 -- # set +x
00:04:17.275 ************************************
00:04:17.275 START TEST no_shrink_alloc
00:04:17.275 ************************************
00:04:17.275 06:44:38 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:17.275 06:44:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:17.275 06:44:38 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:17.275 06:44:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:17.275 06:44:38 -- setup/hugepages.sh@51 -- # shift
00:04:17.275 06:44:38 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:17.275 06:44:38 -- setup/hugepages.sh@52 -- # local node_ids
00:04:17.275 06:44:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.275 06:44:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:17.275 06:44:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:17.275 06:44:38 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:17.275 06:44:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.275 06:44:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:17.275 06:44:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:17.275 06:44:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.275 06:44:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.275 06:44:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:17.275 06:44:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:17.275 06:44:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:17.275 06:44:38 -- setup/hugepages.sh@73 -- # return 0
00:04:17.275 06:44:38 -- setup/hugepages.sh@198 -- # setup output
00:04:17.275 06:44:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.275 06:44:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
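no_shrink_alloc sizes its pool the same way the earlier tests did: get_test_nr_hugepages takes a size plus an optional node list, converts the size into a page count, and pins the whole count to the named node. The arithmetic implied by the trace (size=2097152 in, nr_hugepages=1024 out), assuming the size argument is in kB, which is consistent with the 'Hugepagesize: 2048 kB' reported in the snapshots above:

    size_kb=2097152                                  # requested pool, 2 GiB
    hugepagesize_kb=2048                             # 'Hugepagesize: 2048 kB'
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # = 1024 pages
    echo "nr_hugepages=$nr_hugepages"                # matches nodes_test[0]=1024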
00:04:20.566 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:20.566 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:20.830 06:44:42 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:20.830 06:44:42 -- setup/hugepages.sh@89 -- # local node
00:04:20.830 06:44:42 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:20.830 06:44:42 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:20.830 06:44:42 -- setup/hugepages.sh@92 -- # local surp
00:04:20.830 06:44:42 -- setup/hugepages.sh@93 -- # local resv
00:04:20.830 06:44:42 -- setup/hugepages.sh@94 -- # local anon
00:04:20.830 06:44:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:20.830 06:44:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:20.830 06:44:42 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:20.830 06:44:42 -- setup/common.sh@18 -- # local node=
00:04:20.830 06:44:42 -- setup/common.sh@19 -- # local var val
00:04:20.830 06:44:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.830 06:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.830 06:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.830 06:44:42 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.830 06:44:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.830 06:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': '
00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _
00:04:20.830 06:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43678608 kB' 'MemAvailable: 47393076 kB' 'Buffers: 4100 kB' 'Cached: 10462684 kB' 'SwapCached: 0 kB' 'Active: 7276396 kB' 'Inactive: 3683044 kB' 'Active(anon): 6887732 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495444 kB' 'Mapped: 194940 kB' 'Shmem: 6395076 kB' 'KReclaimable: 279704 kB' 'Slab: 1043108 kB' 'SReclaimable: 279704 kB' 'SUnreclaim: 763404 kB' 'KernelStack: 21984 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8056524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218028 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
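Before counting anonymous huge pages, hugepages.sh@96 checks the kernel's transparent hugepage mode: the sysfs string on this box reads "always [madvise] never", with brackets marking the active mode, and the lookup only proceeds when "[never]" is not the selection. A sketch of that gate (standard sysfs path; get_meminfo_sketch is the illustrative helper sketched earlier, not the script's own function):

    # skip the AnonHugePages lookup entirely when THP is disabled
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)             # 0 kB in this run
        echo "anon_hugepages=${anon:-0}"
    fi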
-- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 
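The wall of trace above is the key-matching loop inside setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a per-node meminfo file) into an array, reads each "Key: value kB" record with IFS=': ', and skips every key that is not the one requested, which is why each candidate (MemTotal, MemFree, Buffers, ...) is compared against the quoted literal AnonHugePages (rendered character-escaped by xtrace) and immediately hits continue. A minimal self-contained sketch of that lookup, reconstructed from the trace rather than from the script itself, so details may differ:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    # Reconstructed from the xtrace above; the names get, node, mem_f and
    # mem follow the trace, the fallback return code is an assumption.
    get_meminfo() {
        local get=$1        # meminfo key to fetch, e.g. AnonHugePages
        local node=${2:-}   # optional NUMA node; empty means system-wide
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the node-local file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every record with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # A quoted right-hand side forces a literal match, which is
            # why xtrace prints the pattern with every character escaped.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # yields 0 on this runner

The scan resumes below and terminates with echo 0 / return 0 once the AnonHugePages record is reached, which hugepages.sh records as anon=0.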
00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.830 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.830 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.831 06:44:42 -- setup/common.sh@33 -- # echo 0 00:04:20.831 06:44:42 -- setup/common.sh@33 -- # return 0 00:04:20.831 06:44:42 -- setup/hugepages.sh@97 -- # anon=0 00:04:20.831 06:44:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.831 06:44:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.831 06:44:42 -- setup/common.sh@18 -- # local node= 00:04:20.831 06:44:42 -- setup/common.sh@19 -- # local var val 00:04:20.831 06:44:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.831 06:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.831 06:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.831 06:44:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.831 06:44:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.831 06:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43676904 kB' 'MemAvailable: 47391356 kB' 'Buffers: 4100 kB' 'Cached: 10462688 kB' 'SwapCached: 0 kB' 'Active: 7276252 kB' 'Inactive: 3683044 kB' 'Active(anon): 6887588 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495792 kB' 'Mapped: 194940 kB' 'Shmem: 6395080 kB' 'KReclaimable: 279672 kB' 'Slab: 1043116 kB' 'SReclaimable: 279672 kB' 'SUnreclaim: 763444 kB' 'KernelStack: 22112 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8058056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218092 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.831 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.831 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 
06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.832 06:44:42 -- setup/common.sh@33 -- # echo 0 00:04:20.832 06:44:42 -- setup/common.sh@33 
-- # return 0 00:04:20.832 06:44:42 -- setup/hugepages.sh@99 -- # surp=0 00:04:20.832 06:44:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.832 06:44:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.832 06:44:42 -- setup/common.sh@18 -- # local node= 00:04:20.832 06:44:42 -- setup/common.sh@19 -- # local var val 00:04:20.832 06:44:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.832 06:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.832 06:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.832 06:44:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.832 06:44:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.832 06:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43680716 kB' 'MemAvailable: 47395168 kB' 'Buffers: 4100 kB' 'Cached: 10462700 kB' 'SwapCached: 0 kB' 'Active: 7276780 kB' 'Inactive: 3683044 kB' 'Active(anon): 6888116 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496300 kB' 'Mapped: 194920 kB' 'Shmem: 6395092 kB' 'KReclaimable: 279672 kB' 'Slab: 1043116 kB' 'SReclaimable: 279672 kB' 'SUnreclaim: 763444 kB' 'KernelStack: 22144 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8058072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218156 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.832 06:44:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.832 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.832 06:44:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 
-- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.833 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.833 06:44:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.834 06:44:42 -- setup/common.sh@33 -- # echo 0 00:04:20.834 06:44:42 -- setup/common.sh@33 -- # return 0 00:04:20.834 06:44:42 -- setup/hugepages.sh@100 -- # resv=0 00:04:20.834 06:44:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.834 nr_hugepages=1024 00:04:20.834 06:44:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.834 resv_hugepages=0 00:04:20.834 06:44:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.834 surplus_hugepages=0 00:04:20.834 06:44:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.834 anon_hugepages=0 00:04:20.834 06:44:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.834 06:44:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:20.834 06:44:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.834 06:44:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.834 06:44:42 -- setup/common.sh@18 -- # local node= 00:04:20.834 06:44:42 -- setup/common.sh@19 -- # local var val 00:04:20.834 06:44:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.834 06:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.834 06:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.834 06:44:42 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:04:20.834 06:44:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.834 06:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43681944 kB' 'MemAvailable: 47396396 kB' 'Buffers: 4100 kB' 'Cached: 10462700 kB' 'SwapCached: 0 kB' 'Active: 7276296 kB' 'Inactive: 3683044 kB' 'Active(anon): 6887632 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495796 kB' 'Mapped: 194920 kB' 'Shmem: 6395092 kB' 'KReclaimable: 279672 kB' 'Slab: 1043116 kB' 'SReclaimable: 279672 kB' 'SUnreclaim: 763444 kB' 'KernelStack: 22080 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8058088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218124 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- 
setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.834 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.834 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.835 06:44:42 -- setup/common.sh@32 -- # continue 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.835 06:44:42 -- setup/common.sh@31 -- # read -r var val _ 
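The @107 and @109 checks traced earlier are the pool consistency test: with surp=0 and resv=0, HugePages_Total read back from meminfo must equal nr_hugepages + surp + resv, i.e. the requested 1024 pages, and get_nodes then walks /sys/devices/system/node/node* to record how the pool splits across NUMA nodes (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2 in this run). A condensed sketch of that accounting, assuming the get_meminfo sketch above is in scope; the per-node read via get_meminfo is an assumption, since the traced get_nodes may read the sysfs nr_hugepages counters directly:

    shopt -s extglob nullglob
    # Hedged reconstruction; nr_hugepages, surp, resv and nodes_sys follow
    # the xtrace, the error message is illustrative only.
    nr_hugepages=1024                       # requested pool size
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 pages of 2048 kB
    (( total == nr_hugepages + surp + resv )) ||
        { echo "hugepage pool mismatch: got $total" >&2; exit 1; }

    # Per-node split of the pool; node0 holds all 1024 pages here.
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        nodes_sys[$n]=$(get_meminfo HugePages_Total "$n")
    done

The per-node lookup that follows (node=0) is exactly this shape: the same get_meminfo, with the node argument set, reading /sys/devices/system/node/node0/meminfo and stripping the "Node 0 " prefix from each record.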
[... setup/common.sh@31-32 -- scan continues: WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted each fail the HugePages_Total match and hit "continue" ...]
00:04:20.835 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:20.835 06:44:42 -- setup/common.sh@33 -- # echo 1024
00:04:20.835 06:44:42 -- setup/common.sh@33 -- # return 0
00:04:20.835 06:44:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:20.835 06:44:42 -- setup/hugepages.sh@112 -- # get_nodes
00:04:20.835 06:44:42 -- setup/hugepages.sh@27 -- # local node
00:04:20.835 06:44:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.835 06:44:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:20.835 06:44:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.835 06:44:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:20.835 06:44:42 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:20.835 06:44:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:20.835 06:44:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.835 06:44:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.835 06:44:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:20.835 06:44:42 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.835 06:44:42 -- setup/common.sh@18 -- # local node=0
00:04:20.835 06:44:42 -- setup/common.sh@19 -- # local var val
00:04:20.835 06:44:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.835 06:44:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.835 06:44:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:20.835 06:44:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:20.835 06:44:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.835 06:44:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.835 06:44:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 19988716 kB' 'MemUsed: 12645720 kB' 'SwapCached: 0 kB' 'Active: 6022520 kB' 'Inactive: 3487008 kB' 'Active(anon): 5899104 kB' 'Inactive(anon): 0 kB' 'Active(file): 123416 kB' 'Inactive(file): 3487008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9186732 kB' 'Mapped: 46400 kB' 'AnonPages: 325992 kB' 'Shmem: 5576308 kB' 'KernelStack: 12024 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102292 kB' 'Slab: 473768 kB' 'SReclaimable: 102292 kB' 'SUnreclaim: 371476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 -- node0 fields MemTotal through HugePages_Free each fail the HugePages_Surp match and hit "continue" ...]
00:04:20.836 06:44:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.836 06:44:42 -- setup/common.sh@33 -- # echo 0
00:04:20.836 06:44:42 -- setup/common.sh@33 -- # return 0
00:04:20.836 06:44:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.836 06:44:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.836 06:44:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.836 06:44:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.836 06:44:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:20.836 node0=1024 expecting 1024
00:04:20.836 06:44:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
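For readers following the xtrace: the pattern above is common.sh's get_meminfo helper querying a single field from a per-node meminfo file. Below is a minimal standalone sketch of that flow; the structure mirrors what the trace shows (prefer /sys/devices/system/node/nodeN/meminfo, strip its "Node N " prefix, then scan key/value pairs until the requested key matches), but the actual SPDK source may differ in detail.

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo flow visible in the trace above.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node files exist on NUMA systems and prefix each line with "Node <N> ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # Linear scan, exactly like the field-by-field [[ ... ]] checks in the log.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0    # prints "0" on the node0 traced above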
00:04:20.836 06:44:42 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:20.836 06:44:42 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:20.836 06:44:42 -- setup/hugepages.sh@202 -- # setup output
00:04:20.836 06:44:42 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.836 06:44:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:25.034 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.034 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.035 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:25.035 INFO: Requested 512 hugepages but 1024 already allocated on node0
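The INFO line above is setup.sh declining to shrink an existing reservation: the run exported NRHUGE=512, but node0 already holds 1024 hugepages, and with CLEAR_HUGE=no the larger pool is left in place. A sketch of that kind of grow-only guard, using the standard kernel sysfs knob (the real setup.sh logic is more involved and may differ):

    # Grow-only hugepage reservation guard (sketch; the path is the standard kernel sysfs knob).
    NRHUGE=${NRHUGE:-512}
    node=0
    knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages

    have=$(<"$knob")
    if ((have >= NRHUGE)); then
        echo "INFO: Requested $NRHUGE hugepages but $have already allocated on node$node"
    else
        echo "$NRHUGE" >"$knob"    # requires root
    fi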
00:04:25.035 06:44:45 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:25.035 06:44:45 -- setup/hugepages.sh@89 -- # local node
00:04:25.035 06:44:45 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:25.035 06:44:45 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:25.035 06:44:45 -- setup/hugepages.sh@92 -- # local surp
00:04:25.035 06:44:45 -- setup/hugepages.sh@93 -- # local resv
00:04:25.035 06:44:45 -- setup/hugepages.sh@94 -- # local anon
00:04:25.035 06:44:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:25.035 06:44:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:25.035 06:44:45 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:25.035 06:44:45 -- setup/common.sh@18 -- # local node=
00:04:25.035 06:44:45 -- setup/common.sh@19 -- # local var val
00:04:25.035 06:44:45 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.035 06:44:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.035 06:44:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.035 06:44:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.035 06:44:45 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.035 06:44:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.035 06:44:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43691924 kB' 'MemAvailable: 47406376 kB' 'Buffers: 4100 kB' 'Cached: 10462796 kB' 'SwapCached: 0 kB' 'Active: 7276364 kB' 'Inactive: 3683044 kB' 'Active(anon): 6887700 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495848 kB' 'Mapped: 194936 kB' 'Shmem: 6395188 kB' 'KReclaimable: 279672 kB' 'Slab: 1043268 kB' 'SReclaimable: 279672 kB' 'SUnreclaim: 763596 kB' 'KernelStack: 21936 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8054424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 -- every field from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits "continue" ...]
00:04:25.036 06:44:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:25.036 06:44:45 -- setup/common.sh@33 -- # echo 0
00:04:25.036 06:44:45 -- setup/common.sh@33 -- # return 0
00:04:25.036 06:44:45 -- setup/hugepages.sh@97 -- # anon=0
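Note that verify_nr_hugepages calls get_meminfo once per metric (AnonHugePages above, then HugePages_Surp and HugePages_Rsvd below, and HugePages_Total at the end), so /proc/meminfo is re-read and re-scanned each time; that is why the same printf payload and field-by-field loop recur four times in this stretch of the log. Outside the harness the same lookups are one-liners, shown here only as an equivalent, not as what common.sh actually does:

    # Equivalent direct lookups (kB for sizes; bare page counts for the HugePages_* fields):
    awk '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo
    awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo
    awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo
    awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo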
00:04:25.036 06:44:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:25.036 06:44:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.036 06:44:45 -- setup/common.sh@18 -- # local node=
00:04:25.036 06:44:45 -- setup/common.sh@19 -- # local var val
00:04:25.036 06:44:45 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.036 06:44:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.036 06:44:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.036 06:44:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.036 06:44:45 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.036 06:44:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.036 06:44:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43692996 kB' 'MemAvailable: 47407448 kB' 'Buffers: 4100 kB' 'Cached: 10462800 kB' 'SwapCached: 0 kB' 'Active: 7276044 kB' 'Inactive: 3683044 kB' 'Active(anon): 6887380 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495596 kB' 'Mapped: 194908 kB' 'Shmem: 6395192 kB' 'KReclaimable: 279672 kB' 'Slab: 1043268 kB' 'SReclaimable: 279672 kB' 'SUnreclaim: 763596 kB' 'KernelStack: 21920 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8054436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 -- every field from MemTotal through HugePages_Free fails the HugePages_Surp match and hits "continue" ...]
00:04:25.037 06:44:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.037 06:44:46 -- setup/common.sh@33 -- # echo 0
00:04:25.037 06:44:46 -- setup/common.sh@33 -- # return 0
00:04:25.037 06:44:46 -- setup/hugepages.sh@99 -- # surp=0
00:04:25.037 06:44:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:25.037 06:44:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:25.037 06:44:46 -- setup/common.sh@18 -- # local node=
00:04:25.037 06:44:46 -- setup/common.sh@19 -- # local var val
00:04:25.037 06:44:46 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.037 06:44:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.037 06:44:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.037 06:44:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.037 06:44:46 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.037 06:44:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.037 06:44:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43692996 kB' 'MemAvailable: 47407448 kB' 'Buffers: 4100 kB' 'Cached: 10462812 kB' 'SwapCached: 0 kB' 'Active: 7276068 kB' 'Inactive: 3683044 kB' 'Active(anon): 6887404 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495592 kB' 'Mapped: 194908 kB' 'Shmem: 6395204 kB' 'KReclaimable: 279672 kB' 'Slab: 1043268 kB' 'SReclaimable: 279672 kB' 'SUnreclaim: 763596 kB' 'KernelStack: 21920 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8054452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 -- every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits "continue" ...]
00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.039 06:44:46 -- setup/common.sh@33 -- # echo 0
00:04:25.039 06:44:46 -- setup/common.sh@33 -- # return 0
00:04:25.039 06:44:46 -- setup/hugepages.sh@100 -- # resv=0
-- # echo nr_hugepages=1024 00:04:25.039 nr_hugepages=1024 00:04:25.039 06:44:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.039 resv_hugepages=0 00:04:25.039 06:44:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.039 surplus_hugepages=0 00:04:25.039 06:44:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.039 anon_hugepages=0 00:04:25.039 06:44:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.039 06:44:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.039 06:44:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.039 06:44:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.039 06:44:46 -- setup/common.sh@18 -- # local node= 00:04:25.039 06:44:46 -- setup/common.sh@19 -- # local var val 00:04:25.039 06:44:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.039 06:44:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.039 06:44:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.039 06:44:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.039 06:44:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.039 06:44:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283780 kB' 'MemFree: 43692996 kB' 'MemAvailable: 47407448 kB' 'Buffers: 4100 kB' 'Cached: 10462812 kB' 'SwapCached: 0 kB' 'Active: 7276068 kB' 'Inactive: 3683044 kB' 'Active(anon): 6887404 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3683044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495592 kB' 'Mapped: 194908 kB' 'Shmem: 6395204 kB' 'KReclaimable: 279672 kB' 'Slab: 1043268 kB' 'SReclaimable: 279672 kB' 'SUnreclaim: 763596 kB' 'KernelStack: 21920 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8054468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1953140 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 51380224 kB' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 
-- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.039 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.039 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.040 06:44:46 -- setup/common.sh@33 -- # echo 1024 00:04:25.040 06:44:46 -- setup/common.sh@33 -- # return 0 00:04:25.040 06:44:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.040 06:44:46 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.040 06:44:46 -- setup/hugepages.sh@27 -- # local node 00:04:25.040 06:44:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.040 06:44:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.040 06:44:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.040 06:44:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.040 06:44:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.040 06:44:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.040 06:44:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.040 06:44:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.040 06:44:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.040 06:44:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.040 06:44:46 -- setup/common.sh@18 -- # local node=0 00:04:25.040 06:44:46 -- setup/common.sh@19 -- # local var val 00:04:25.040 06:44:46 -- setup/common.sh@20 -- # local mem_f mem 
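[Editor's note] The long runs of "continue" above are a single helper at work: get_meminfo loads one meminfo file, strips any per-node prefix, then scans it field by field until the requested key matches and echoes its value. Below is a minimal reconstruction of that helper from the trace, for orientation only; the function name and variable names follow the trace, but this is a sketch, not the verbatim SPDK setup/common.sh:

    shopt -s extglob                     # the +([0-9]) pattern below needs extglob

    get_meminfo() {                      # usage: get_meminfo <field> [node]
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem line
        # A node argument switches to that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
            echo "$val"                        # numeric value; a trailing "kB" lands in _
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total          # prints 1024 on the machine traced here
    get_meminfo HugePages_Surp 0         # the same lookup against node0's meminfo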
00:04:25.040 06:44:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.040 06:44:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.040 06:44:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.040 06:44:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.040 06:44:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 19993600 kB' 'MemUsed: 12640836 kB' 'SwapCached: 0 kB' 'Active: 6022508 kB' 'Inactive: 3487008 kB' 'Active(anon): 5899092 kB' 'Inactive(anon): 0 kB' 'Active(file): 123416 kB' 'Inactive(file): 3487008 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9186824 kB' 'Mapped: 46400 kB' 'AnonPages: 325932 kB' 'Shmem: 5576400 kB' 'KernelStack: 12040 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102292 kB' 'Slab: 473688 kB' 'SReclaimable: 102292 kB' 'SUnreclaim: 371396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.040 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.040 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 
06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # continue 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.041 06:44:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.041 06:44:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.041 06:44:46 -- setup/common.sh@33 -- # echo 0 00:04:25.041 06:44:46 -- setup/common.sh@33 -- # return 0 00:04:25.041 06:44:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.041 06:44:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.041 06:44:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.041 06:44:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.041 06:44:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.041 node0=1024 expecting 1024 00:04:25.041 06:44:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.041 00:04:25.041 real 0m7.415s 00:04:25.041 user 0m2.738s 00:04:25.041 sys 0m4.820s 00:04:25.041 06:44:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.041 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:04:25.041 ************************************ 00:04:25.041 END TEST no_shrink_alloc 00:04:25.041 ************************************ 00:04:25.041 06:44:46 -- setup/hugepages.sh@217 -- # clear_hp 00:04:25.041 06:44:46 -- setup/hugepages.sh@37 -- # local node hp 00:04:25.041 06:44:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.041 06:44:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.041 06:44:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:25.041 06:44:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.041 06:44:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:25.041 06:44:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.041 06:44:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.041 06:44:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:25.041 06:44:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.041 06:44:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:25.041 06:44:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.041 06:44:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.041 00:04:25.041 real 0m28.940s 00:04:25.042 user 0m10.162s 00:04:25.042 sys 0m17.395s 00:04:25.042 06:44:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.042 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:04:25.042 ************************************ 00:04:25.042 END TEST hugepages 00:04:25.042 ************************************ 
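[Editor's note] Between the hugepages suite ending above and the driver suite starting below, clear_hp resets every per-node hugepage pool so later tests inherit a clean slate; that is what the repeated "echo 0" writes in the trace accomplish. A stand-alone sketch of that step, assuming root privileges and the two-node sysfs layout this log shows (the glob spelling is simplified here from the traced extglob form):

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # zeroes both the 2 MiB and 1 GiB pools
            done
        done
        export CLEAR_HUGE=yes                 # also exported by the traced script
    }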
00:04:25.042 06:44:46 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:25.042 06:44:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.042 06:44:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.042 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:04:25.042 ************************************ 00:04:25.042 START TEST driver 00:04:25.042 ************************************ 00:04:25.042 06:44:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:25.042 * Looking for test storage... 00:04:25.042 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:25.042 06:44:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:25.042 06:44:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:25.042 06:44:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:25.042 06:44:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:25.042 06:44:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:25.042 06:44:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:25.042 06:44:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:25.042 06:44:46 -- scripts/common.sh@335 -- # IFS=.-: 00:04:25.042 06:44:46 -- scripts/common.sh@335 -- # read -ra ver1 00:04:25.042 06:44:46 -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.042 06:44:46 -- scripts/common.sh@336 -- # read -ra ver2 00:04:25.042 06:44:46 -- scripts/common.sh@337 -- # local 'op=<' 00:04:25.042 06:44:46 -- scripts/common.sh@339 -- # ver1_l=2 00:04:25.042 06:44:46 -- scripts/common.sh@340 -- # ver2_l=1 00:04:25.042 06:44:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:25.042 06:44:46 -- scripts/common.sh@343 -- # case "$op" in 00:04:25.042 06:44:46 -- scripts/common.sh@344 -- # : 1 00:04:25.042 06:44:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:25.042 06:44:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.042 06:44:46 -- scripts/common.sh@364 -- # decimal 1 00:04:25.042 06:44:46 -- scripts/common.sh@352 -- # local d=1 00:04:25.042 06:44:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.042 06:44:46 -- scripts/common.sh@354 -- # echo 1 00:04:25.042 06:44:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:25.042 06:44:46 -- scripts/common.sh@365 -- # decimal 2 00:04:25.042 06:44:46 -- scripts/common.sh@352 -- # local d=2 00:04:25.042 06:44:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.042 06:44:46 -- scripts/common.sh@354 -- # echo 2 00:04:25.042 06:44:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:25.042 06:44:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:25.042 06:44:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:25.042 06:44:46 -- scripts/common.sh@367 -- # return 0 00:04:25.042 06:44:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.042 06:44:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:25.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.042 --rc genhtml_branch_coverage=1 00:04:25.042 --rc genhtml_function_coverage=1 00:04:25.042 --rc genhtml_legend=1 00:04:25.042 --rc geninfo_all_blocks=1 00:04:25.042 --rc geninfo_unexecuted_blocks=1 00:04:25.042 00:04:25.042 ' 00:04:25.042 06:44:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:25.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.042 --rc genhtml_branch_coverage=1 00:04:25.042 --rc genhtml_function_coverage=1 00:04:25.042 --rc genhtml_legend=1 00:04:25.042 --rc geninfo_all_blocks=1 00:04:25.042 --rc geninfo_unexecuted_blocks=1 00:04:25.042 00:04:25.042 ' 00:04:25.042 06:44:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:25.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.042 --rc genhtml_branch_coverage=1 00:04:25.042 --rc genhtml_function_coverage=1 00:04:25.042 --rc genhtml_legend=1 00:04:25.042 --rc geninfo_all_blocks=1 00:04:25.042 --rc geninfo_unexecuted_blocks=1 00:04:25.042 00:04:25.042 ' 00:04:25.042 06:44:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:25.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.042 --rc genhtml_branch_coverage=1 00:04:25.042 --rc genhtml_function_coverage=1 00:04:25.042 --rc genhtml_legend=1 00:04:25.042 --rc geninfo_all_blocks=1 00:04:25.042 --rc geninfo_unexecuted_blocks=1 00:04:25.042 00:04:25.042 ' 00:04:25.042 06:44:46 -- setup/driver.sh@68 -- # setup reset 00:04:25.042 06:44:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.042 06:44:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.320 06:44:51 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:30.320 06:44:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.320 06:44:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.320 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:04:30.320 ************************************ 00:04:30.320 START TEST guess_driver 00:04:30.320 ************************************ 00:04:30.320 06:44:51 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:30.320 06:44:51 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:30.320 06:44:51 -- setup/driver.sh@47 -- # local fail=0 00:04:30.320 06:44:51 -- setup/driver.sh@49 -- # pick_driver 00:04:30.320 06:44:51 -- setup/driver.sh@36 -- 
# vfio 00:04:30.320 06:44:51 -- setup/driver.sh@21 -- # local iommu_grups 00:04:30.320 06:44:51 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:30.320 06:44:51 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:30.320 06:44:51 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:30.320 06:44:51 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:30.320 06:44:51 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:30.320 06:44:51 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:30.320 06:44:51 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:30.320 06:44:51 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:30.320 06:44:51 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:30.320 06:44:51 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:30.320 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.320 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.320 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:30.320 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:30.320 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:30.320 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:30.320 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:30.320 06:44:51 -- setup/driver.sh@30 -- # return 0 00:04:30.320 06:44:51 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:30.320 06:44:51 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:30.320 06:44:51 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:30.320 06:44:51 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:30.320 Looking for driver=vfio-pci 00:04:30.320 06:44:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.320 06:44:51 -- setup/driver.sh@45 -- # setup output config 00:04:30.320 06:44:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.320 06:44:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.612 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.612 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.612 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.613 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.613 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.613 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.613 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.613 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.613 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.613 06:44:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.613 06:44:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.613 06:44:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.517 06:44:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:35.517 06:44:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:35.517 06:44:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.517 06:44:57 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:35.517 06:44:57 -- setup/driver.sh@65 -- # setup reset 00:04:35.517 06:44:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.517 06:44:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.791 00:04:40.791 real 0m10.775s 00:04:40.791 user 0m2.779s 00:04:40.791 sys 0m5.303s 00:04:40.791 06:45:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.791 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.791 ************************************ 00:04:40.791 END TEST guess_driver 00:04:40.791 ************************************ 00:04:40.791 00:04:40.791 real 0m16.049s 00:04:40.791 user 0m4.290s 00:04:40.791 sys 0m8.239s 00:04:40.791 06:45:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.791 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.791 
************************************ 00:04:40.791 END TEST driver 00:04:40.791 ************************************ 00:04:40.791 06:45:02 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:40.791 06:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.791 06:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.792 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.792 ************************************ 00:04:40.792 START TEST devices 00:04:40.792 ************************************ 00:04:40.792 06:45:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:40.792 * Looking for test storage... 00:04:40.792 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:40.792 06:45:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:40.792 06:45:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:40.792 06:45:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.051 06:45:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.051 06:45:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.051 06:45:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.051 06:45:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.051 06:45:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.051 06:45:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.051 06:45:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.051 06:45:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.051 06:45:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.051 06:45:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.051 06:45:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.051 06:45:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.051 06:45:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.051 06:45:02 -- scripts/common.sh@344 -- # : 1 00:04:41.051 06:45:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.051 06:45:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.051 06:45:02 -- scripts/common.sh@364 -- # decimal 1 00:04:41.051 06:45:02 -- scripts/common.sh@352 -- # local d=1 00:04:41.051 06:45:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.051 06:45:02 -- scripts/common.sh@354 -- # echo 1 00:04:41.051 06:45:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.051 06:45:02 -- scripts/common.sh@365 -- # decimal 2 00:04:41.051 06:45:02 -- scripts/common.sh@352 -- # local d=2 00:04:41.051 06:45:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.051 06:45:02 -- scripts/common.sh@354 -- # echo 2 00:04:41.051 06:45:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.051 06:45:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.051 06:45:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.051 06:45:02 -- scripts/common.sh@367 -- # return 0 00:04:41.051 06:45:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.051 06:45:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.051 --rc genhtml_branch_coverage=1 00:04:41.051 --rc genhtml_function_coverage=1 00:04:41.051 --rc genhtml_legend=1 00:04:41.051 --rc geninfo_all_blocks=1 00:04:41.051 --rc geninfo_unexecuted_blocks=1 00:04:41.051 00:04:41.051 ' 00:04:41.051 06:45:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.051 --rc genhtml_branch_coverage=1 00:04:41.051 --rc genhtml_function_coverage=1 00:04:41.051 --rc genhtml_legend=1 00:04:41.051 --rc geninfo_all_blocks=1 00:04:41.051 --rc geninfo_unexecuted_blocks=1 00:04:41.051 00:04:41.051 ' 00:04:41.051 06:45:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.051 --rc genhtml_branch_coverage=1 00:04:41.051 --rc genhtml_function_coverage=1 00:04:41.051 --rc genhtml_legend=1 00:04:41.051 --rc geninfo_all_blocks=1 00:04:41.051 --rc geninfo_unexecuted_blocks=1 00:04:41.051 00:04:41.051 ' 00:04:41.051 06:45:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.051 --rc genhtml_branch_coverage=1 00:04:41.051 --rc genhtml_function_coverage=1 00:04:41.051 --rc genhtml_legend=1 00:04:41.051 --rc geninfo_all_blocks=1 00:04:41.051 --rc geninfo_unexecuted_blocks=1 00:04:41.051 00:04:41.051 ' 00:04:41.051 06:45:02 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:41.051 06:45:02 -- setup/devices.sh@192 -- # setup reset 00:04:41.051 06:45:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.051 06:45:02 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.248 06:45:06 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:45.248 06:45:06 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:45.248 06:45:06 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:45.248 06:45:06 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:45.248 06:45:06 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:45.248 06:45:06 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:45.248 06:45:06 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:45.248 06:45:06 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:45.248 06:45:06 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:45.248 06:45:06 -- setup/devices.sh@196 -- # blocks=() 00:04:45.248 06:45:06 -- setup/devices.sh@196 -- # declare -a blocks 00:04:45.248 06:45:06 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:45.248 06:45:06 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:45.248 06:45:06 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:45.248 06:45:06 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:45.248 06:45:06 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:45.248 06:45:06 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:45.248 06:45:06 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:45.248 06:45:06 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:45.248 06:45:06 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:45.248 06:45:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:45.248 06:45:06 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:45.248 No valid GPT data, bailing 00:04:45.248 06:45:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:45.248 06:45:06 -- scripts/common.sh@393 -- # pt= 00:04:45.248 06:45:06 -- scripts/common.sh@394 -- # return 1 00:04:45.248 06:45:06 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:45.248 06:45:06 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:45.248 06:45:06 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:45.248 06:45:06 -- setup/common.sh@80 -- # echo 2000398934016 00:04:45.248 06:45:06 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:45.248 06:45:06 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:45.248 06:45:06 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:45.248 06:45:06 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:45.248 06:45:06 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:45.248 06:45:06 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:45.248 06:45:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.248 06:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.248 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:04:45.248 ************************************ 00:04:45.248 START TEST nvme_mount 00:04:45.248 ************************************ 00:04:45.248 06:45:06 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:45.248 06:45:06 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:45.248 06:45:06 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:45.248 06:45:06 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.248 06:45:06 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.248 06:45:06 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:45.248 06:45:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:45.248 06:45:06 -- setup/common.sh@40 -- # local part_no=1 00:04:45.248 06:45:06 -- setup/common.sh@41 -- # local size=1073741824 00:04:45.248 06:45:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:45.248 06:45:06 -- setup/common.sh@44 -- # parts=() 00:04:45.248 06:45:06 -- setup/common.sh@44 -- # local parts 00:04:45.248 06:45:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:45.248 06:45:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.248 06:45:06 
-- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.248 06:45:06 -- setup/common.sh@46 -- # (( part++ )) 00:04:45.248 06:45:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.248 06:45:06 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:45.248 06:45:06 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:45.248 06:45:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.188 Creating new GPT entries in memory. 00:04:46.188 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:46.188 other utilities. 00:04:46.188 06:45:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:46.188 06:45:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.188 06:45:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:46.188 06:45:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:46.188 06:45:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:47.127 Creating new GPT entries in memory. 00:04:47.127 The operation has completed successfully. 00:04:47.127 06:45:08 -- setup/common.sh@57 -- # (( part++ )) 00:04:47.127 06:45:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.127 06:45:08 -- setup/common.sh@62 -- # wait 1159194 00:04:47.127 06:45:08 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.127 06:45:08 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:47.127 06:45:08 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.127 06:45:08 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:47.127 06:45:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:47.127 06:45:08 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.127 06:45:08 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.127 06:45:08 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:47.127 06:45:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:47.127 06:45:08 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.127 06:45:08 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.127 06:45:08 -- setup/devices.sh@53 -- # local found=0 00:04:47.127 06:45:08 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.127 06:45:08 -- setup/devices.sh@56 -- # : 00:04:47.127 06:45:08 -- setup/devices.sh@59 -- # local pci status 00:04:47.127 06:45:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.127 06:45:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:47.127 06:45:08 -- setup/devices.sh@47 -- # setup output config 00:04:47.127 06:45:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.127 06:45:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:50.418 06:45:11 -- setup/devices.sh@63 -- # found=1 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.418 06:45:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:50.418 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.677 06:45:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.677 06:45:12 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:50.678 06:45:12 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.678 06:45:12 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 
00:04:50.678 06:45:12 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.678 06:45:12 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:50.678 06:45:12 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.678 06:45:12 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.678 06:45:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.678 06:45:12 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.678 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.678 06:45:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.678 06:45:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.937 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:50.937 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.937 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.937 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.937 06:45:12 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:50.937 06:45:12 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:50.937 06:45:12 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.937 06:45:12 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:50.937 06:45:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:50.937 06:45:12 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.937 06:45:12 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.937 06:45:12 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:50.937 06:45:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:50.937 06:45:12 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.937 06:45:12 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.937 06:45:12 -- setup/devices.sh@53 -- # local found=0 00:04:50.937 06:45:12 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.937 06:45:12 -- setup/devices.sh@56 -- # : 00:04:51.196 06:45:12 -- setup/devices.sh@59 -- # local pci status 00:04:51.196 06:45:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.196 06:45:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:51.196 06:45:12 -- setup/devices.sh@47 -- # setup output config 00:04:51.196 06:45:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.196 06:45:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
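The mkfs helper just traced (setup/common.sh@66-72) is the same for a partition and, in this second pass, for the whole disk: create the mount point, force-format ext4, mount. A minimal sketch under those assumptions, including the optional size cap seen in the 1024M call above:

#!/usr/bin/env bash
# Sketch of the mkfs+mount helper (setup/common.sh@66-72) used above.
# Assumptions: caller supplies device, mount point, optional fs size (e.g. 1024M).
dev=${1:?device required} mount_point=${2:?mount point required} size=$3
mkdir -p "$mount_point"
[[ -e $dev ]] || { echo "$dev does not exist" >&2; exit 1; }
# -q: quiet; -F: format even if a filesystem signature is already present
mkfs.ext4 -qF "$dev" $size                   # $size may be empty: use whole device
mount "$dev" "$mount_point"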
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:54.489 06:45:15 -- setup/devices.sh@63 -- # found=1 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.489 06:45:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.489 06:45:16 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:54.489 06:45:16 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.489 06:45:16 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.489 06:45:16 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.489 06:45:16 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.489 06:45:16 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:54.489 06:45:16 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:54.489 06:45:16 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:54.489 06:45:16 -- setup/devices.sh@50 -- # local mount_point= 00:04:54.489 06:45:16 -- setup/devices.sh@51 -- # local test_file= 00:04:54.489 06:45:16 -- setup/devices.sh@53 -- # local found=0 00:04:54.489 06:45:16 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.489 06:45:16 -- setup/devices.sh@59 -- # local pci status 00:04:54.489 06:45:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.489 06:45:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:54.489 06:45:16 -- setup/devices.sh@47 -- # setup output config 00:04:54.489 06:45:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.489 06:45:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:58.749 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.749 06:45:19 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:58.749 06:45:19 -- setup/devices.sh@63 -- # found=1 00:04:58.749 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.749 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.749 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.749 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.749 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.749 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.749 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 06:45:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.750 06:45:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.750 06:45:19 -- setup/devices.sh@68 -- # return 0 00:04:58.750 06:45:19 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:58.750 06:45:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.750 06:45:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.750 06:45:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.750 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.750 00:04:58.750 real 0m13.223s 00:04:58.750 user 0m3.910s 00:04:58.750 sys 0m7.270s 00:04:58.750 06:45:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.750 06:45:19 -- common/autotest_common.sh@10 -- # set +x 00:04:58.750 ************************************ 00:04:58.750 END TEST nvme_mount 00:04:58.750 ************************************ 00:04:58.750 06:45:19 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:58.750 06:45:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.750 06:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.750 06:45:19 -- common/autotest_common.sh@10 -- # set +x 00:04:58.750 ************************************ 00:04:58.750 START TEST dm_mount 00:04:58.750 ************************************ 00:04:58.750 06:45:19 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:58.750 06:45:19 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:58.750 06:45:19 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:58.750 06:45:19 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:58.750 06:45:19 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:58.750 06:45:19 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.750 06:45:19 -- setup/common.sh@40 -- # local part_no=2 00:04:58.750 06:45:19 -- setup/common.sh@41 -- # local size=1073741824 00:04:58.750 06:45:19 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.750 06:45:19 -- setup/common.sh@44 -- # parts=() 00:04:58.750 06:45:19 -- setup/common.sh@44 -- # local parts 00:04:58.750 06:45:19 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.750 06:45:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.750 06:45:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.750 06:45:19 -- setup/common.sh@46 -- # (( part++ )) 00:04:58.750 06:45:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.750 06:45:19 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.750 06:45:19 -- setup/common.sh@46 -- # (( part++ )) 00:04:58.750 06:45:19 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.750 06:45:19 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:58.750 06:45:19 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.750 
06:45:19 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:59.318 Creating new GPT entries in memory. 00:04:59.318 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.318 other utilities. 00:04:59.318 06:45:20 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.319 06:45:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.319 06:45:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.319 06:45:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.319 06:45:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:00.260 Creating new GPT entries in memory. 00:05:00.260 The operation has completed successfully. 00:05:00.260 06:45:21 -- setup/common.sh@57 -- # (( part++ )) 00:05:00.260 06:45:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.260 06:45:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.260 06:45:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.260 06:45:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:01.198 The operation has completed successfully. 00:05:01.198 06:45:22 -- setup/common.sh@57 -- # (( part++ )) 00:05:01.198 06:45:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.198 06:45:22 -- setup/common.sh@62 -- # wait 1163916 00:05:01.458 06:45:22 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:01.458 06:45:22 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:01.458 06:45:22 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.458 06:45:22 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:01.458 06:45:22 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:01.458 06:45:22 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.458 06:45:22 -- setup/devices.sh@161 -- # break 00:05:01.458 06:45:22 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.458 06:45:22 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:01.458 06:45:22 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:01.458 06:45:22 -- setup/devices.sh@166 -- # dm=dm-2 00:05:01.458 06:45:22 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:01.458 06:45:22 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:01.458 06:45:22 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:01.458 06:45:22 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:01.458 06:45:22 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:01.458 06:45:22 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.458 06:45:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:01.458 06:45:22 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:01.458 06:45:22 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test 
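To make the dm-2 checks above concrete: dmsetup create reads a table on stdin, the /dev/mapper symlink is resolved to its dm-N node, and each backing partition must then list that node under holders/ in sysfs. A sketch of those steps; the linear concatenation table is an assumption, since the log never prints what was piped into dmsetup create:

#!/usr/bin/env bash
# Sketch of the dm create + holder verification traced above.
# Assumption: a linear table concatenating two 1 GiB partitions; the original
# log does not show the table that devices.sh feeds to 'dmsetup create'.
sectors=2097152                              # 1 GiB in 512-byte sectors
dmsetup create nvme_dm_test <<EOF
0 $sectors linear /dev/nvme0n1p1 0
$sectors $sectors linear /dev/nvme0n1p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-2
dm=${dm##*/}
# both partitions must report the dm node as their holder
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] || exit 1
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] || exit 1
echo "$dm holds nvme0n1p1 and nvme0n1p2"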
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.458 06:45:23 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:01.458 06:45:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:01.458 06:45:23 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:01.458 06:45:23 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.458 06:45:23 -- setup/devices.sh@53 -- # local found=0 00:05:01.458 06:45:23 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.458 06:45:23 -- setup/devices.sh@56 -- # : 00:05:01.458 06:45:23 -- setup/devices.sh@59 -- # local pci status 00:05:01.458 06:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.458 06:45:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:01.458 06:45:23 -- setup/devices.sh@47 -- # setup output config 00:05:01.458 06:45:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.458 06:45:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:04.750 06:45:26 -- setup/devices.sh@63 -- # found=1 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.750 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.750 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.751 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.751 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.751 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.751 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.751 06:45:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.751 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.010 06:45:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.010 06:45:26 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:05.010 06:45:26 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:05.010 06:45:26 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:05.010 06:45:26 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.010 06:45:26 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:05.010 06:45:26 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:05.010 06:45:26 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:05.010 06:45:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:05.010 06:45:26 -- setup/devices.sh@50 -- # local mount_point= 00:05:05.010 06:45:26 -- setup/devices.sh@51 -- # local test_file= 00:05:05.010 06:45:26 -- setup/devices.sh@53 -- # local found=0 00:05:05.010 06:45:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:05.010 06:45:26 -- setup/devices.sh@59 -- # local pci status 00:05:05.010 06:45:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.010 06:45:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:05.010 06:45:26 -- setup/devices.sh@47 -- # setup output config 00:05:05.010 06:45:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.010 06:45:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:08.302 06:45:29 -- setup/devices.sh@63 -- # found=1 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.302 06:45:29 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.302 06:45:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.562 06:45:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.562 06:45:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.562 06:45:30 -- setup/devices.sh@68 -- # return 0 00:05:08.562 06:45:30 -- setup/devices.sh@187 -- # cleanup_dm 00:05:08.562 06:45:30 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:08.562 06:45:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.562 06:45:30 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:08.562 06:45:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.562 06:45:30 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:08.562 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.562 06:45:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.562 06:45:30 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:08.562 00:05:08.562 real 0m10.352s 00:05:08.562 user 0m2.563s 00:05:08.562 sys 0m4.914s 00:05:08.562 06:45:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.562 06:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.562 
************************************ 00:05:08.562 END TEST dm_mount 00:05:08.562 ************************************ 00:05:08.562 06:45:30 -- setup/devices.sh@1 -- # cleanup 00:05:08.562 06:45:30 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.562 06:45:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.562 06:45:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.562 06:45:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.562 06:45:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.562 06:45:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.820 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.820 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.820 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.820 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.820 06:45:30 -- setup/devices.sh@12 -- # cleanup_dm 00:05:08.820 06:45:30 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:08.820 06:45:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.820 06:45:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.820 06:45:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.820 06:45:30 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.820 06:45:30 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.820 00:05:08.820 real 0m28.156s 00:05:08.820 user 0m8.083s 00:05:08.820 sys 0m15.088s 00:05:08.820 06:45:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.820 06:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.820 ************************************ 00:05:08.820 END TEST devices 00:05:08.820 ************************************ 00:05:09.080 00:05:09.080 real 1m39.673s 00:05:09.080 user 0m30.996s 00:05:09.080 sys 0m56.685s 00:05:09.080 06:45:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.080 06:45:30 -- common/autotest_common.sh@10 -- # set +x 00:05:09.080 ************************************ 00:05:09.080 END TEST setup.sh 00:05:09.080 ************************************ 00:05:09.080 06:45:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:12.374 Hugepages 00:05:12.374 node hugesize free / total 00:05:12.374 node0 1048576kB 0 / 0 00:05:12.374 node0 2048kB 2048 / 2048 00:05:12.374 node1 1048576kB 0 / 0 00:05:12.374 node1 2048kB 0 / 0 00:05:12.374 00:05:12.374 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.374 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:12.374 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:12.374 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:12.374 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:12.374 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:12.374 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:12.374 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:12.374 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:05:12.374 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:12.634 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:12.634 06:45:34 -- spdk/autotest.sh@128 -- # uname -s 00:05:12.634 06:45:34 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:12.634 06:45:34 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:12.634 06:45:34 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:15.933 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:15.933 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:15.933 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:15.933 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:15.933 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:15.933 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:15.933 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.192 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:18.099 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.358 06:45:39 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:19.298 06:45:40 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:19.298 06:45:40 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:19.298 06:45:40 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.298 06:45:40 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:19.298 06:45:40 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:19.298 06:45:40 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:19.298 06:45:40 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.298 06:45:40 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:19.298 06:45:40 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:19.298 06:45:40 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:19.298 06:45:40 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:19.298 06:45:40 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.497 Waiting for block devices as requested 00:05:23.497 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:23.497 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:23.756 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:23.756 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:23.756 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:24.015 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:24.015 0000:80:04.1 (8086 
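The setup.sh status table above is assembled from standard sysfs counters: per NUMA node, each hugepage size directory exposes free_hugepages and nr_hugepages. A small sketch that reproduces the "node hugesize free / total" columns from sysfs alone, with no SPDK scripts involved:

#!/usr/bin/env bash
# Sketch reproducing the hugepage columns of the status table above.
for node in /sys/devices/system/node/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*; do
    [[ -d $hp ]] || continue
    size=${hp##*hugepages-}                  # e.g. 2048kB or 1048576kB
    free=$(cat "$hp/free_hugepages")
    total=$(cat "$hp/nr_hugepages")
    printf '%s %10s %6s / %s\n' "${node##*/}" "$size" "$free" "$total"
  done
done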
2021): vfio-pci -> ioatdma 00:05:24.015 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:24.274 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:24.274 06:45:45 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:24.274 06:45:45 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:24.274 06:45:45 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:24.274 06:45:45 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:05:24.274 06:45:45 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:24.274 06:45:45 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:24.274 06:45:45 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:24.274 06:45:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:24.274 06:45:45 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:24.274 06:45:45 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:24.533 06:45:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:24.533 06:45:45 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:24.533 06:45:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:24.533 06:45:45 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:05:24.534 06:45:45 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:24.534 06:45:45 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:24.534 06:45:45 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:24.534 06:45:45 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:24.534 06:45:45 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:24.534 06:45:45 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:24.534 06:45:45 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:24.534 06:45:45 -- common/autotest_common.sh@1552 -- # continue 00:05:24.534 06:45:45 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:24.534 06:45:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.534 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.534 06:45:45 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:24.534 06:45:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.534 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:24.534 06:45:45 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:27.826 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:27.827 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:27.827 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:27.827 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:28.086 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
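The oacs/unvmcap probing traced above is how pre-cleanup decides whether there is anything to revert: OACS is read from nvme id-ctrl, bit 3 (value 8) flags Namespace Management support, and an unvmcap of 0 means all capacity is already allocated to namespaces. A sketch of that probe, assuming nvme-cli is installed:

#!/usr/bin/env bash
# Sketch of the OACS / unvmcap probe traced above (autotest_common.sh@1534-1552).
# Assumption: nvme-cli is installed and the controller node is /dev/nvme0.
ctrlr=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0xe'
oacs_ns_manage=$(( oacs & 0x8 ))             # bit 3: Namespace Management
if (( oacs_ns_manage != 0 )); then
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
  # 0 unallocated capacity: namespaces cover the drive, nothing to restore
  (( unvmcap == 0 )) && echo "$ctrlr: namespaces intact, continuing"
fi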
00:05:29.993 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:29.993 06:45:51 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:29.993 06:45:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.993 06:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.993 06:45:51 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:29.993 06:45:51 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:29.993 06:45:51 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.993 06:45:51 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:29.993 06:45:51 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:29.993 06:45:51 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:29.993 06:45:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:29.993 06:45:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:29.993 06:45:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.993 06:45:51 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.993 06:45:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:30.253 06:45:51 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:30.253 06:45:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:30.253 06:45:51 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:30.253 06:45:51 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:30.253 06:45:51 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:05:30.253 06:45:51 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:30.253 06:45:51 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:05:30.253 06:45:51 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:05:30.253 06:45:51 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:05:30.253 06:45:51 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1173936 00:05:30.253 06:45:51 -- common/autotest_common.sh@1593 -- # waitforlisten 1173936 00:05:30.253 06:45:51 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.253 06:45:51 -- common/autotest_common.sh@829 -- # '[' -z 1173936 ']' 00:05:30.253 06:45:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.253 06:45:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.253 06:45:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.253 06:45:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.253 06:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:30.253 [2024-12-15 06:45:51.788628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
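Both the namespace-revert pass earlier and the opal_revert_cleanup here enumerate controllers the same way: gen_nvme.sh emits an SPDK bdev config, jq extracts each traddr (the PCI BDF), and the PCI device id is then matched against 0x0a54. A sketch of get_nvme_bdfs / get_nvme_bdfs_by_id as traced:

#!/usr/bin/env bash
# Sketch of get_nvme_bdfs / get_nvme_bdfs_by_id as traced above.
# Assumption: run from inside an SPDK checkout so scripts/gen_nvme.sh exists.
rootdir=${rootdir:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
wanted=0x0a54                                # device id matched in the trace
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
for bdf in "${bdfs[@]}"; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")
  [[ $device == "$wanted" ]] && printf '%s\n' "$bdf"
done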
00:05:30.253 [2024-12-15 06:45:51.788684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173936 ] 00:05:30.253 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.253 [2024-12-15 06:45:51.873980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.512 [2024-12-15 06:45:51.912581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.512 [2024-12-15 06:45:51.912702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.081 06:45:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.081 06:45:52 -- common/autotest_common.sh@862 -- # return 0 00:05:31.081 06:45:52 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:05:31.081 06:45:52 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:05:31.081 06:45:52 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:34.375 nvme0n1 00:05:34.375 06:45:55 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:34.375 [2024-12-15 06:45:55.775914] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:34.375 request: 00:05:34.375 { 00:05:34.375 "nvme_ctrlr_name": "nvme0", 00:05:34.375 "password": "test", 00:05:34.375 "method": "bdev_nvme_opal_revert", 00:05:34.375 "req_id": 1 00:05:34.375 } 00:05:34.375 Got JSON-RPC error response 00:05:34.375 response: 00:05:34.375 { 00:05:34.375 "code": -32602, 00:05:34.375 "message": "Invalid parameters" 00:05:34.375 } 00:05:34.375 06:45:55 -- common/autotest_common.sh@1599 -- # true 00:05:34.375 06:45:55 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:34.375 06:45:55 -- common/autotest_common.sh@1603 -- # killprocess 1173936 00:05:34.375 06:45:55 -- common/autotest_common.sh@936 -- # '[' -z 1173936 ']' 00:05:34.375 06:45:55 -- common/autotest_common.sh@940 -- # kill -0 1173936 00:05:34.375 06:45:55 -- common/autotest_common.sh@941 -- # uname 00:05:34.375 06:45:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.375 06:45:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1173936 00:05:34.375 06:45:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.375 06:45:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.375 06:45:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1173936' 00:05:34.375 killing process with pid 1173936 00:05:34.375 06:45:55 -- common/autotest_common.sh@955 -- # kill 1173936 00:05:34.375 06:45:55 -- common/autotest_common.sh@960 -- # wait 1173936 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.375 EAL: Unexpected size 0 of DMA remapping cleared instead of 
2097152 00:05:34.375 [the identical "EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152" warning repeats for every remaining 2 MB DMA mapping while the killed spdk_tgt tears down; the long run of duplicate lines is truncated here]
0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.377 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.009 06:45:58 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:37.009 06:45:58 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:37.009 06:45:58 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:37.009 06:45:58 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:37.009 06:45:58 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:37.009 06:45:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.009 06:45:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.009 06:45:58 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:37.009 06:45:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.009 06:45:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.009 06:45:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.009 ************************************ 00:05:37.009 START TEST env 00:05:37.009 ************************************ 00:05:37.009 06:45:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:37.009 * Looking for test storage... 00:05:37.009 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:37.009 06:45:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.009 06:45:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.009 06:45:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.009 06:45:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.009 06:45:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.009 06:45:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.009 06:45:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.009 06:45:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.009 06:45:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.009 06:45:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.009 06:45:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.009 06:45:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.009 06:45:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.009 06:45:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.009 06:45:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.009 06:45:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.009 06:45:58 -- scripts/common.sh@344 -- # : 1 00:05:37.009 06:45:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.009 06:45:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.009 06:45:58 -- scripts/common.sh@364 -- # decimal 1 00:05:37.009 06:45:58 -- scripts/common.sh@352 -- # local d=1 00:05:37.009 06:45:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.009 06:45:58 -- scripts/common.sh@354 -- # echo 1 00:05:37.009 06:45:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.009 06:45:58 -- scripts/common.sh@365 -- # decimal 2 00:05:37.009 06:45:58 -- scripts/common.sh@352 -- # local d=2 00:05:37.009 06:45:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.009 06:45:58 -- scripts/common.sh@354 -- # echo 2 00:05:37.009 06:45:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.009 06:45:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.009 06:45:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.009 06:45:58 -- scripts/common.sh@367 -- # return 0 00:05:37.009 06:45:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.009 06:45:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.009 --rc genhtml_branch_coverage=1 00:05:37.009 --rc genhtml_function_coverage=1 00:05:37.009 --rc genhtml_legend=1 00:05:37.009 --rc geninfo_all_blocks=1 00:05:37.009 --rc geninfo_unexecuted_blocks=1 00:05:37.009 00:05:37.009 ' 00:05:37.009 06:45:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.009 --rc genhtml_branch_coverage=1 00:05:37.009 --rc genhtml_function_coverage=1 00:05:37.009 --rc genhtml_legend=1 00:05:37.009 --rc geninfo_all_blocks=1 00:05:37.009 --rc geninfo_unexecuted_blocks=1 00:05:37.009 00:05:37.009 ' 00:05:37.009 06:45:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.009 --rc genhtml_branch_coverage=1 00:05:37.009 --rc genhtml_function_coverage=1 00:05:37.009 --rc genhtml_legend=1 00:05:37.009 --rc geninfo_all_blocks=1 00:05:37.009 --rc geninfo_unexecuted_blocks=1 00:05:37.009 00:05:37.009 ' 00:05:37.009 06:45:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.009 --rc genhtml_branch_coverage=1 00:05:37.009 --rc genhtml_function_coverage=1 00:05:37.009 --rc genhtml_legend=1 00:05:37.009 --rc geninfo_all_blocks=1 00:05:37.009 --rc geninfo_unexecuted_blocks=1 00:05:37.009 00:05:37.009 ' 00:05:37.010 06:45:58 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.010 06:45:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.010 06:45:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.010 06:45:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.010 ************************************ 00:05:37.010 START TEST env_memory 00:05:37.010 ************************************ 00:05:37.010 06:45:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.010 00:05:37.010 00:05:37.010 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.010 http://cunit.sourceforge.net/ 00:05:37.010 00:05:37.010 00:05:37.010 Suite: memory 00:05:37.269 Test: alloc and free memory map ...[2024-12-15 06:45:58.675870] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:05:37.269 passed 00:05:37.269 Test: mem map translation ...[2024-12-15 06:45:58.694385] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.269 [2024-12-15 06:45:58.694400] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.269 [2024-12-15 06:45:58.694436] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.270 [2024-12-15 06:45:58.694444] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.270 passed 00:05:37.270 Test: mem map registration ...[2024-12-15 06:45:58.729408] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.270 [2024-12-15 06:45:58.729423] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.270 passed 00:05:37.270 Test: mem map adjacent registrations ...passed 00:05:37.270 00:05:37.270 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.270 suites 1 1 n/a 0 0 00:05:37.270 tests 4 4 4 0 0 00:05:37.270 asserts 152 152 152 0 n/a 00:05:37.270 00:05:37.270 Elapsed time = 0.130 seconds 00:05:37.270 00:05:37.270 real 0m0.143s 00:05:37.270 user 0m0.132s 00:05:37.270 sys 0m0.011s 00:05:37.270 06:45:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.270 06:45:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.270 ************************************ 00:05:37.270 END TEST env_memory 00:05:37.270 ************************************ 00:05:37.270 06:45:58 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.270 06:45:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.270 06:45:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.270 06:45:58 -- common/autotest_common.sh@10 -- # set +x 00:05:37.270 ************************************ 00:05:37.270 START TEST env_vtophys 00:05:37.270 ************************************ 00:05:37.270 06:45:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.270 EAL: lib.eal log level changed from notice to debug 00:05:37.270 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.270 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.270 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.270 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.270 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.270 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.270 EAL: Detected lcore 6 as core 6 on socket 0 00:05:37.270 EAL: Detected lcore 7 as core 8 on socket 0 00:05:37.270 EAL: Detected lcore 8 as core 9 on socket 0 00:05:37.270 EAL: Detected lcore 9 as core 10 on socket 0 00:05:37.270 EAL: Detected lcore 10 as core 11 on socket 0 00:05:37.270 EAL: Detected lcore 11 as core 12 on socket 0 00:05:37.270 EAL: Detected lcore 12 as core 13 on socket 0 00:05:37.270 EAL: Detected lcore 13 as core 14 on socket 0 00:05:37.270 EAL: 
Detected lcore 14 as core 16 on socket 0 00:05:37.270 EAL: Detected lcore 15 as core 17 on socket 0 00:05:37.270 EAL: Detected lcore 16 as core 18 on socket 0 00:05:37.270 EAL: Detected lcore 17 as core 19 on socket 0 00:05:37.270 EAL: Detected lcore 18 as core 20 on socket 0 00:05:37.270 EAL: Detected lcore 19 as core 21 on socket 0 00:05:37.270 EAL: Detected lcore 20 as core 22 on socket 0 00:05:37.270 EAL: Detected lcore 21 as core 24 on socket 0 00:05:37.270 EAL: Detected lcore 22 as core 25 on socket 0 00:05:37.270 EAL: Detected lcore 23 as core 26 on socket 0 00:05:37.270 EAL: Detected lcore 24 as core 27 on socket 0 00:05:37.270 EAL: Detected lcore 25 as core 28 on socket 0 00:05:37.270 EAL: Detected lcore 26 as core 29 on socket 0 00:05:37.270 EAL: Detected lcore 27 as core 30 on socket 0 00:05:37.270 EAL: Detected lcore 28 as core 0 on socket 1 00:05:37.270 EAL: Detected lcore 29 as core 1 on socket 1 00:05:37.270 EAL: Detected lcore 30 as core 2 on socket 1 00:05:37.270 EAL: Detected lcore 31 as core 3 on socket 1 00:05:37.270 EAL: Detected lcore 32 as core 4 on socket 1 00:05:37.270 EAL: Detected lcore 33 as core 5 on socket 1 00:05:37.270 EAL: Detected lcore 34 as core 6 on socket 1 00:05:37.270 EAL: Detected lcore 35 as core 8 on socket 1 00:05:37.270 EAL: Detected lcore 36 as core 9 on socket 1 00:05:37.270 EAL: Detected lcore 37 as core 10 on socket 1 00:05:37.270 EAL: Detected lcore 38 as core 11 on socket 1 00:05:37.270 EAL: Detected lcore 39 as core 12 on socket 1 00:05:37.270 EAL: Detected lcore 40 as core 13 on socket 1 00:05:37.270 EAL: Detected lcore 41 as core 14 on socket 1 00:05:37.270 EAL: Detected lcore 42 as core 16 on socket 1 00:05:37.270 EAL: Detected lcore 43 as core 17 on socket 1 00:05:37.270 EAL: Detected lcore 44 as core 18 on socket 1 00:05:37.270 EAL: Detected lcore 45 as core 19 on socket 1 00:05:37.270 EAL: Detected lcore 46 as core 20 on socket 1 00:05:37.270 EAL: Detected lcore 47 as core 21 on socket 1 00:05:37.270 EAL: Detected lcore 48 as core 22 on socket 1 00:05:37.270 EAL: Detected lcore 49 as core 24 on socket 1 00:05:37.270 EAL: Detected lcore 50 as core 25 on socket 1 00:05:37.270 EAL: Detected lcore 51 as core 26 on socket 1 00:05:37.270 EAL: Detected lcore 52 as core 27 on socket 1 00:05:37.270 EAL: Detected lcore 53 as core 28 on socket 1 00:05:37.270 EAL: Detected lcore 54 as core 29 on socket 1 00:05:37.270 EAL: Detected lcore 55 as core 30 on socket 1 00:05:37.270 EAL: Detected lcore 56 as core 0 on socket 0 00:05:37.270 EAL: Detected lcore 57 as core 1 on socket 0 00:05:37.270 EAL: Detected lcore 58 as core 2 on socket 0 00:05:37.270 EAL: Detected lcore 59 as core 3 on socket 0 00:05:37.270 EAL: Detected lcore 60 as core 4 on socket 0 00:05:37.270 EAL: Detected lcore 61 as core 5 on socket 0 00:05:37.270 EAL: Detected lcore 62 as core 6 on socket 0 00:05:37.270 EAL: Detected lcore 63 as core 8 on socket 0 00:05:37.270 EAL: Detected lcore 64 as core 9 on socket 0 00:05:37.270 EAL: Detected lcore 65 as core 10 on socket 0 00:05:37.270 EAL: Detected lcore 66 as core 11 on socket 0 00:05:37.270 EAL: Detected lcore 67 as core 12 on socket 0 00:05:37.270 EAL: Detected lcore 68 as core 13 on socket 0 00:05:37.270 EAL: Detected lcore 69 as core 14 on socket 0 00:05:37.270 EAL: Detected lcore 70 as core 16 on socket 0 00:05:37.270 EAL: Detected lcore 71 as core 17 on socket 0 00:05:37.270 EAL: Detected lcore 72 as core 18 on socket 0 00:05:37.270 EAL: Detected lcore 73 as core 19 on socket 0 00:05:37.270 EAL: Detected lcore 74 as core 20 on 
socket 0 00:05:37.270 EAL: Detected lcore 75 as core 21 on socket 0 00:05:37.270 EAL: Detected lcore 76 as core 22 on socket 0 00:05:37.270 EAL: Detected lcore 77 as core 24 on socket 0 00:05:37.270 EAL: Detected lcore 78 as core 25 on socket 0 00:05:37.270 EAL: Detected lcore 79 as core 26 on socket 0 00:05:37.270 EAL: Detected lcore 80 as core 27 on socket 0 00:05:37.270 EAL: Detected lcore 81 as core 28 on socket 0 00:05:37.270 EAL: Detected lcore 82 as core 29 on socket 0 00:05:37.270 EAL: Detected lcore 83 as core 30 on socket 0 00:05:37.270 EAL: Detected lcore 84 as core 0 on socket 1 00:05:37.270 EAL: Detected lcore 85 as core 1 on socket 1 00:05:37.270 EAL: Detected lcore 86 as core 2 on socket 1 00:05:37.270 EAL: Detected lcore 87 as core 3 on socket 1 00:05:37.270 EAL: Detected lcore 88 as core 4 on socket 1 00:05:37.270 EAL: Detected lcore 89 as core 5 on socket 1 00:05:37.270 EAL: Detected lcore 90 as core 6 on socket 1 00:05:37.270 EAL: Detected lcore 91 as core 8 on socket 1 00:05:37.270 EAL: Detected lcore 92 as core 9 on socket 1 00:05:37.270 EAL: Detected lcore 93 as core 10 on socket 1 00:05:37.270 EAL: Detected lcore 94 as core 11 on socket 1 00:05:37.270 EAL: Detected lcore 95 as core 12 on socket 1 00:05:37.270 EAL: Detected lcore 96 as core 13 on socket 1 00:05:37.270 EAL: Detected lcore 97 as core 14 on socket 1 00:05:37.270 EAL: Detected lcore 98 as core 16 on socket 1 00:05:37.270 EAL: Detected lcore 99 as core 17 on socket 1 00:05:37.270 EAL: Detected lcore 100 as core 18 on socket 1 00:05:37.270 EAL: Detected lcore 101 as core 19 on socket 1 00:05:37.270 EAL: Detected lcore 102 as core 20 on socket 1 00:05:37.270 EAL: Detected lcore 103 as core 21 on socket 1 00:05:37.270 EAL: Detected lcore 104 as core 22 on socket 1 00:05:37.270 EAL: Detected lcore 105 as core 24 on socket 1 00:05:37.270 EAL: Detected lcore 106 as core 25 on socket 1 00:05:37.270 EAL: Detected lcore 107 as core 26 on socket 1 00:05:37.270 EAL: Detected lcore 108 as core 27 on socket 1 00:05:37.270 EAL: Detected lcore 109 as core 28 on socket 1 00:05:37.270 EAL: Detected lcore 110 as core 29 on socket 1 00:05:37.270 EAL: Detected lcore 111 as core 30 on socket 1 00:05:37.270 EAL: Maximum logical cores by configuration: 128 00:05:37.270 EAL: Detected CPU lcores: 112 00:05:37.270 EAL: Detected NUMA nodes: 2 00:05:37.270 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:37.270 EAL: Detected shared linkage of DPDK 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:37.270 EAL: Registered [vdev] bus. 
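The lcore inventory above (112 logical cores across 2 NUMA sockets) is recorded by rte_eal_init() and can be read back through the public rte_lcore.h API. A minimal standalone sketch, not part of this test run, assuming the same DPDK 22.11 headers:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* Core mask, hugepage options, etc. are taken from argv. */
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        unsigned int lcore_id;
        /* Iterates only the lcores enabled by the core mask; the log
         * above lists every detected lcore regardless of the mask. */
        RTE_LCORE_FOREACH(lcore_id)
            printf("lcore %u -> socket %u\n",
                   lcore_id, rte_lcore_to_socket_id(lcore_id));

        printf("enabled lcores: %u\n", rte_lcore_count());
        rte_eal_cleanup();
        return 0;
    }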
00:05:37.270 EAL: bus.vdev log level changed from disabled to notice 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:37.270 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.270 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:37.270 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:37.270 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.270 EAL: No shared files mode enabled, IPC is disabled 00:05:37.270 EAL: Bus pci wants IOVA as 'DC' 00:05:37.270 EAL: Bus vdev wants IOVA as 'DC' 00:05:37.270 EAL: Buses did not request a specific IOVA mode. 00:05:37.270 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.270 EAL: Selected IOVA mode 'VA' 00:05:37.270 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.270 EAL: Probing VFIO support... 00:05:37.270 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.270 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.270 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.270 EAL: VFIO support initialized 00:05:37.270 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.270 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.270 EAL: Setting up physically contiguous memory... 
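Both buses above request IOVA as 'DC' (no preference), so with a working IOMMU the EAL settles on VA mode and initializes VFIO before carving out virtual areas. A short sketch, under the same DPDK 22.11 assumption, of how an application can confirm the negotiated mode once rte_eal_init() has returned:

    #include <stdio.h>
    #include <rte_eal.h>

    /* Valid only after rte_eal_init() has completed. */
    static void report_iova_mode(void)
    {
        switch (rte_eal_iova_mode()) {
        case RTE_IOVA_VA:
            puts("IOVA mode: VA (IOMMU-backed, as selected above)");
            break;
        case RTE_IOVA_PA:
            puts("IOVA mode: PA (physical addresses)");
            break;
        default:
            puts("IOVA mode: not decided");
            break;
        }
    }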
00:05:37.270 EAL: Setting maximum number of open files to 524288 00:05:37.270 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.270 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.271 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.271 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.271 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.271 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.271 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.271 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.271 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:37.271 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.271 EAL: Hugepages will be freed exactly as allocated. 00:05:37.271 EAL: No shared files mode enabled, IPC is disabled 00:05:37.271 EAL: No shared files mode enabled, IPC is disabled 00:05:37.271 EAL: TSC frequency is ~2500000 KHz 00:05:37.271 EAL: Main lcore 0 is ready (tid=7f1257966a00;cpuset=[0]) 00:05:37.271 EAL: Trying to obtain current memory policy. 00:05:37.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.271 EAL: Restoring previous memory policy: 0 00:05:37.271 EAL: request: mp_malloc_sync 00:05:37.271 EAL: No shared files mode enabled, IPC is disabled 00:05:37.271 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.271 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:37.271 EAL: probe driver: 8086:37d2 net_i40e 00:05:37.271 EAL: Not managed by a supported kernel driver, skipped 00:05:37.271 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:37.271 EAL: probe driver: 8086:37d2 net_i40e 00:05:37.271 EAL: Not managed by a supported kernel driver, skipped 00:05:37.271 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.531 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.531 00:05:37.531 00:05:37.531 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.531 http://cunit.sourceforge.net/ 00:05:37.531 00:05:37.531 00:05:37.531 Suite: components_suite 00:05:37.531 Test: vtophys_malloc_test ...passed 00:05:37.531 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.531 EAL: Trying to obtain current memory policy. 00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.531 EAL: Trying to obtain current memory policy. 00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.531 EAL: Trying to obtain current memory policy. 
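The expand/shrink pairs here, continuing below up to 1026MB, are the registered 'spdk:(nil)' mem event callback firing as vtophys_spdk_malloc_test allocates and frees progressively larger DMA-safe buffers. A hedged sketch of the allocation pattern that drives those callbacks, assuming an already initialized SPDK environment:

    #include "spdk/env.h"

    /* An allocation too large for the current heap makes the env layer
     * grab hugepages ("Heap on socket 0 was expanded by N MB"); freeing
     * it lets the heap shrink again ("... was shrunk by N MB"). */
    static void trigger_mem_events(void)
    {
        void *buf = spdk_malloc(4 * 1024 * 1024, 0x1000, NULL,
                                SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        if (buf != NULL)
            spdk_free(buf);
    }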
00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.531 EAL: Trying to obtain current memory policy. 00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.531 EAL: Trying to obtain current memory policy. 00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.531 EAL: Trying to obtain current memory policy. 00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.531 EAL: Trying to obtain current memory policy. 00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.531 EAL: Restoring previous memory policy: 4 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.531 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.531 EAL: request: mp_malloc_sync 00:05:37.531 EAL: No shared files mode enabled, IPC is disabled 00:05:37.531 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.531 EAL: Trying to obtain current memory policy. 
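The companion check, vtophys_malloc_test (already marked passed above), verifies that each such buffer translates to a usable physical address or IOVA. A minimal sketch of that translation path, again assuming an initialized SPDK environment; the buffer size and alignment here are illustrative only:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    static int translate_one_buffer(void)
    {
        /* 2 MB buffer, hugepage-aligned, from the pinned DMA heap. */
        void *buf = spdk_dma_malloc(0x200000, 0x200000, NULL);
        if (buf == NULL)
            return -1;

        uint64_t paddr = spdk_vtophys(buf, NULL);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            spdk_dma_free(buf);
            return -1;
        }
        printf("vaddr %p -> IOVA 0x%" PRIx64 "\n", buf, paddr);
        spdk_dma_free(buf);
        return 0;
    }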
00:05:37.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.791 EAL: Restoring previous memory policy: 4 00:05:37.791 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.791 EAL: request: mp_malloc_sync 00:05:37.791 EAL: No shared files mode enabled, IPC is disabled 00:05:37.791 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.791 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.791 EAL: request: mp_malloc_sync 00:05:37.791 EAL: No shared files mode enabled, IPC is disabled 00:05:37.791 EAL: Heap on socket 0 was shrunk by 514MB 00:05:37.791 EAL: Trying to obtain current memory policy. 00:05:37.791 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.051 EAL: Restoring previous memory policy: 4 00:05:38.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.051 EAL: request: mp_malloc_sync 00:05:38.051 EAL: No shared files mode enabled, IPC is disabled 00:05:38.051 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.310 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.310 EAL: request: mp_malloc_sync 00:05:38.310 EAL: No shared files mode enabled, IPC is disabled 00:05:38.310 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.310 passed 00:05:38.310 00:05:38.310 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.310 suites 1 1 n/a 0 0 00:05:38.310 tests 2 2 2 0 0 00:05:38.310 asserts 497 497 497 0 n/a 00:05:38.310 00:05:38.310 Elapsed time = 0.968 seconds 00:05:38.310 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.310 EAL: request: mp_malloc_sync 00:05:38.310 EAL: No shared files mode enabled, IPC is disabled 00:05:38.310 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.310 EAL: No shared files mode enabled, IPC is disabled 00:05:38.310 EAL: No shared files mode enabled, IPC is disabled 00:05:38.310 EAL: No shared files mode enabled, IPC is disabled 00:05:38.310 00:05:38.310 real 0m1.115s 00:05:38.310 user 0m0.648s 00:05:38.310 sys 0m0.435s 00:05:38.310 06:45:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.310 06:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:38.310 ************************************ 00:05:38.310 END TEST env_vtophys 00:05:38.310 ************************************ 00:05:38.569 06:45:59 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.569 06:45:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.569 06:45:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.569 06:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:38.569 ************************************ 00:05:38.569 START TEST env_pci 00:05:38.569 ************************************ 00:05:38.569 06:45:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.569 00:05:38.569 00:05:38.569 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.569 http://cunit.sourceforge.net/ 00:05:38.569 00:05:38.569 00:05:38.569 Suite: pci 00:05:38.569 Test: pci_hook ...[2024-12-15 06:46:00.005682] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1175504 has claimed it 00:05:38.569 EAL: Cannot find device (10000:00:01.0) 00:05:38.569 EAL: Failed to attach device on primary process 00:05:38.569 passed 00:05:38.569 00:05:38.569 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.569 suites 1 1 n/a 0 0 00:05:38.569 tests 1 1 1 0 0 00:05:38.569 asserts 
25 25 25 0 n/a 00:05:38.569 00:05:38.569 Elapsed time = 0.035 seconds 00:05:38.569 00:05:38.569 real 0m0.056s 00:05:38.569 user 0m0.014s 00:05:38.569 sys 0m0.042s 00:05:38.569 06:46:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.569 06:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.569 ************************************ 00:05:38.569 END TEST env_pci 00:05:38.569 ************************************ 00:05:38.569 06:46:00 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.569 06:46:00 -- env/env.sh@15 -- # uname 00:05:38.569 06:46:00 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.569 06:46:00 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.569 06:46:00 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.569 06:46:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:38.569 06:46:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.569 06:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.569 ************************************ 00:05:38.569 START TEST env_dpdk_post_init 00:05:38.569 ************************************ 00:05:38.569 06:46:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.569 EAL: Detected CPU lcores: 112 00:05:38.569 EAL: Detected NUMA nodes: 2 00:05:38.569 EAL: Detected shared linkage of DPDK 00:05:38.569 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.569 EAL: Selected IOVA mode 'VA' 00:05:38.570 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.570 EAL: VFIO support initialized 00:05:38.570 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.829 EAL: Using IOMMU type 1 (Type 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:38.829 EAL: Ignore mapping IO port bar(1) 00:05:38.829 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:39.768 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:43.961 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:43.961 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:43.961 Starting DPDK initialization... 00:05:43.961 Starting SPDK post initialization... 00:05:43.961 SPDK NVMe probe 00:05:43.961 Attaching to 0000:d8:00.0 00:05:43.961 Attached to 0000:d8:00.0 00:05:43.961 Cleaning up... 00:05:43.961 00:05:43.961 real 0m5.365s 00:05:43.961 user 0m4.005s 00:05:43.961 sys 0m0.415s 00:05:43.961 06:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.961 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:43.961 ************************************ 00:05:43.961 END TEST env_dpdk_post_init 00:05:43.961 ************************************ 00:05:43.961 06:46:05 -- env/env.sh@26 -- # uname 00:05:43.961 06:46:05 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.961 06:46:05 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.961 06:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.961 06:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.961 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:43.961 ************************************ 00:05:43.961 START TEST env_mem_callbacks 00:05:43.961 ************************************ 00:05:43.961 06:46:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.961 EAL: Detected CPU lcores: 112 00:05:43.961 EAL: Detected NUMA nodes: 2 00:05:43.961 EAL: Detected shared linkage of DPDK 00:05:43.961 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.961 EAL: Selected IOVA mode 'VA' 00:05:43.961 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.961 EAL: VFIO support initialized 00:05:43.961 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.961 00:05:43.961 00:05:43.962 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.962 http://cunit.sourceforge.net/ 00:05:43.962 00:05:43.962 00:05:43.962 Suite: memory 00:05:43.962 Test: test ... 
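The register/malloc/buf/free/unregister trace on the lines below is env_mem_callbacks driving spdk_mem_register() and spdk_mem_unregister() through the allocator hooks. As a companion, a minimal sketch of the same API pair for memory obtained outside the SPDK allocator; the region size is illustrative and an initialized SPDK environment is assumed:

    #include "spdk/env.h"

    /* Externally allocated memory must be registered before a driver
     * may DMA to or from it, and unregistered once the I/O is done. */
    static int register_for_dma(void *vaddr)
    {
        if (spdk_mem_register(vaddr, 0x200000) != 0)  /* 2 MB region */
            return -1;
        /* ... submit I/O referencing vaddr ... */
        return spdk_mem_unregister(vaddr, 0x200000);
    }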
00:05:43.962 register 0x200000200000 2097152 00:05:43.962 malloc 3145728 00:05:43.962 register 0x200000400000 4194304 00:05:43.962 buf 0x200000500000 len 3145728 PASSED 00:05:43.962 malloc 64 00:05:43.962 buf 0x2000004fff40 len 64 PASSED 00:05:43.962 malloc 4194304 00:05:43.962 register 0x200000800000 6291456 00:05:43.962 buf 0x200000a00000 len 4194304 PASSED 00:05:43.962 free 0x200000500000 3145728 00:05:43.962 free 0x2000004fff40 64 00:05:43.962 unregister 0x200000400000 4194304 PASSED 00:05:43.962 free 0x200000a00000 4194304 00:05:43.962 unregister 0x200000800000 6291456 PASSED 00:05:43.962 malloc 8388608 00:05:43.962 register 0x200000400000 10485760 00:05:43.962 buf 0x200000600000 len 8388608 PASSED 00:05:43.962 free 0x200000600000 8388608 00:05:43.962 unregister 0x200000400000 10485760 PASSED 00:05:43.962 passed 00:05:43.962 00:05:43.962 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.962 suites 1 1 n/a 0 0 00:05:43.962 tests 1 1 1 0 0 00:05:43.962 asserts 15 15 15 0 n/a 00:05:43.962 00:05:43.962 Elapsed time = 0.008 seconds 00:05:43.962 00:05:43.962 real 0m0.069s 00:05:43.962 user 0m0.019s 00:05:43.962 sys 0m0.050s 00:05:43.962 06:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.962 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:43.962 ************************************ 00:05:43.962 END TEST env_mem_callbacks 00:05:43.962 ************************************ 00:05:44.220 00:05:44.220 real 0m7.197s 00:05:44.220 user 0m5.010s 00:05:44.220 sys 0m1.271s 00:05:44.220 06:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.220 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:44.220 ************************************ 00:05:44.220 END TEST env 00:05:44.220 ************************************ 00:05:44.220 06:46:05 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.220 06:46:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.220 06:46:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.220 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:44.220 ************************************ 00:05:44.220 START TEST rpc 00:05:44.220 ************************************ 00:05:44.220 06:46:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.220 * Looking for test storage... 
00:05:44.220 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:44.220 06:46:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.220 06:46:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.220 06:46:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.220 06:46:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.220 06:46:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.220 06:46:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.220 06:46:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.220 06:46:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.220 06:46:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.220 06:46:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.220 06:46:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.220 06:46:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.220 06:46:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.220 06:46:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.220 06:46:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.220 06:46:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.220 06:46:05 -- scripts/common.sh@344 -- # : 1 00:05:44.220 06:46:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.220 06:46:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.220 06:46:05 -- scripts/common.sh@364 -- # decimal 1 00:05:44.220 06:46:05 -- scripts/common.sh@352 -- # local d=1 00:05:44.220 06:46:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.479 06:46:05 -- scripts/common.sh@354 -- # echo 1 00:05:44.479 06:46:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.479 06:46:05 -- scripts/common.sh@365 -- # decimal 2 00:05:44.479 06:46:05 -- scripts/common.sh@352 -- # local d=2 00:05:44.479 06:46:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.479 06:46:05 -- scripts/common.sh@354 -- # echo 2 00:05:44.479 06:46:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.479 06:46:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.479 06:46:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.479 06:46:05 -- scripts/common.sh@367 -- # return 0 00:05:44.479 06:46:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.479 06:46:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.479 --rc genhtml_branch_coverage=1 00:05:44.479 --rc genhtml_function_coverage=1 00:05:44.479 --rc genhtml_legend=1 00:05:44.479 --rc geninfo_all_blocks=1 00:05:44.479 --rc geninfo_unexecuted_blocks=1 00:05:44.479 00:05:44.479 ' 00:05:44.479 06:46:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.479 --rc genhtml_branch_coverage=1 00:05:44.479 --rc genhtml_function_coverage=1 00:05:44.479 --rc genhtml_legend=1 00:05:44.479 --rc geninfo_all_blocks=1 00:05:44.479 --rc geninfo_unexecuted_blocks=1 00:05:44.479 00:05:44.479 ' 00:05:44.479 06:46:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.479 --rc genhtml_branch_coverage=1 00:05:44.479 --rc genhtml_function_coverage=1 00:05:44.479 --rc genhtml_legend=1 00:05:44.479 --rc geninfo_all_blocks=1 00:05:44.479 --rc geninfo_unexecuted_blocks=1 00:05:44.479 00:05:44.479 ' 
00:05:44.479 06:46:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.479 --rc genhtml_branch_coverage=1 00:05:44.479 --rc genhtml_function_coverage=1 00:05:44.479 --rc genhtml_legend=1 00:05:44.479 --rc geninfo_all_blocks=1 00:05:44.479 --rc geninfo_unexecuted_blocks=1 00:05:44.479 00:05:44.479 ' 00:05:44.479 06:46:05 -- rpc/rpc.sh@65 -- # spdk_pid=1176646 00:05:44.479 06:46:05 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.479 06:46:05 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.479 06:46:05 -- rpc/rpc.sh@67 -- # waitforlisten 1176646 00:05:44.479 06:46:05 -- common/autotest_common.sh@829 -- # '[' -z 1176646 ']' 00:05:44.479 06:46:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.479 06:46:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.479 06:46:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.479 06:46:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.479 06:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:44.479 [2024-12-15 06:46:05.921802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:44.479 [2024-12-15 06:46:05.921861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176646 ] 00:05:44.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.479 [2024-12-15 06:46:06.006225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.479 [2024-12-15 06:46:06.044756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.479 [2024-12-15 06:46:06.044864] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.479 [2024-12-15 06:46:06.044875] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1176646' to capture a snapshot of events at runtime. 00:05:44.479 [2024-12-15 06:46:06.044884] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1176646 for offline analysis/debug. 
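waitforlisten above polls for /var/tmp/spdk.sock, which spdk_tgt opens once its reactors are running (next line). The target side reduces to SPDK's event-framework boilerplate; a skeletal sketch under that assumption, not the actual spdk_tgt source, with an illustrative application name:

    #include "spdk/event.h"

    static void app_started(void *arg)
    {
        /* Reactors are up and the JSON-RPC server is listening;
         * the app now runs until spdk_app_stop() is called. */
        (void)arg;
    }

    int main(int argc, char **argv)
    {
        struct spdk_app_opts opts = {};
        int rc;

        (void)argc;
        (void)argv;
        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "tgt_sketch";             /* illustrative */
        opts.rpc_addr = "/var/tmp/spdk.sock"; /* socket waitforlisten watches */

        rc = spdk_app_start(&opts, app_started, NULL); /* blocks */
        spdk_app_fini();
        return rc;
    }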
00:05:44.479 [2024-12-15 06:46:06.044905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.418 06:46:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.418 06:46:06 -- common/autotest_common.sh@862 -- # return 0 00:05:45.418 06:46:06 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:45.418 06:46:06 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:45.418 06:46:06 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.418 06:46:06 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.418 06:46:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.418 06:46:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.418 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.418 ************************************ 00:05:45.418 START TEST rpc_integrity 00:05:45.418 ************************************ 00:05:45.418 06:46:06 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:45.418 06:46:06 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.418 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.418 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.418 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.418 06:46:06 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.418 06:46:06 -- rpc/rpc.sh@13 -- # jq length 00:05:45.418 06:46:06 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.418 06:46:06 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.418 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.418 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.418 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.418 06:46:06 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.418 06:46:06 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.418 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.418 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.418 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.418 06:46:06 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.418 { 00:05:45.418 "name": "Malloc0", 00:05:45.418 "aliases": [ 00:05:45.418 "3d2ddd4f-2efe-4d1a-8da8-bcc3436ea050" 00:05:45.418 ], 00:05:45.418 "product_name": "Malloc disk", 00:05:45.418 "block_size": 512, 00:05:45.418 "num_blocks": 16384, 00:05:45.418 "uuid": "3d2ddd4f-2efe-4d1a-8da8-bcc3436ea050", 00:05:45.418 "assigned_rate_limits": { 00:05:45.418 "rw_ios_per_sec": 0, 00:05:45.418 "rw_mbytes_per_sec": 0, 00:05:45.418 "r_mbytes_per_sec": 0, 00:05:45.418 "w_mbytes_per_sec": 0 00:05:45.418 }, 00:05:45.418 "claimed": false, 00:05:45.418 "zoned": false, 00:05:45.418 "supported_io_types": { 00:05:45.418 "read": true, 00:05:45.418 "write": true, 00:05:45.418 "unmap": true, 00:05:45.418 "write_zeroes": true, 00:05:45.418 "flush": true, 00:05:45.418 "reset": true, 00:05:45.418 "compare": false, 00:05:45.418 "compare_and_write": false, 00:05:45.418 "abort": true, 00:05:45.418 "nvme_admin": 
false, 00:05:45.418 "nvme_io": false 00:05:45.418 }, 00:05:45.418 "memory_domains": [ 00:05:45.418 { 00:05:45.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.418 "dma_device_type": 2 00:05:45.418 } 00:05:45.418 ], 00:05:45.418 "driver_specific": {} 00:05:45.418 } 00:05:45.418 ]' 00:05:45.418 06:46:06 -- rpc/rpc.sh@17 -- # jq length 00:05:45.418 06:46:06 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.418 06:46:06 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.418 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.418 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.418 [2024-12-15 06:46:06.859750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.418 [2024-12-15 06:46:06.859780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.418 [2024-12-15 06:46:06.859793] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11a1280 00:05:45.418 [2024-12-15 06:46:06.859802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.418 [2024-12-15 06:46:06.860789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.418 [2024-12-15 06:46:06.860811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.418 Passthru0 00:05:45.418 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.418 06:46:06 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.418 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.418 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.418 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.418 06:46:06 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.418 { 00:05:45.418 "name": "Malloc0", 00:05:45.418 "aliases": [ 00:05:45.418 "3d2ddd4f-2efe-4d1a-8da8-bcc3436ea050" 00:05:45.418 ], 00:05:45.418 "product_name": "Malloc disk", 00:05:45.418 "block_size": 512, 00:05:45.418 "num_blocks": 16384, 00:05:45.418 "uuid": "3d2ddd4f-2efe-4d1a-8da8-bcc3436ea050", 00:05:45.418 "assigned_rate_limits": { 00:05:45.418 "rw_ios_per_sec": 0, 00:05:45.418 "rw_mbytes_per_sec": 0, 00:05:45.418 "r_mbytes_per_sec": 0, 00:05:45.418 "w_mbytes_per_sec": 0 00:05:45.418 }, 00:05:45.418 "claimed": true, 00:05:45.418 "claim_type": "exclusive_write", 00:05:45.418 "zoned": false, 00:05:45.418 "supported_io_types": { 00:05:45.418 "read": true, 00:05:45.418 "write": true, 00:05:45.418 "unmap": true, 00:05:45.418 "write_zeroes": true, 00:05:45.418 "flush": true, 00:05:45.418 "reset": true, 00:05:45.418 "compare": false, 00:05:45.418 "compare_and_write": false, 00:05:45.418 "abort": true, 00:05:45.418 "nvme_admin": false, 00:05:45.418 "nvme_io": false 00:05:45.418 }, 00:05:45.418 "memory_domains": [ 00:05:45.418 { 00:05:45.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.418 "dma_device_type": 2 00:05:45.418 } 00:05:45.418 ], 00:05:45.418 "driver_specific": {} 00:05:45.418 }, 00:05:45.418 { 00:05:45.418 "name": "Passthru0", 00:05:45.418 "aliases": [ 00:05:45.419 "ad8fab42-ee39-5401-a8f1-2b2b4ca32ab5" 00:05:45.419 ], 00:05:45.419 "product_name": "passthru", 00:05:45.419 "block_size": 512, 00:05:45.419 "num_blocks": 16384, 00:05:45.419 "uuid": "ad8fab42-ee39-5401-a8f1-2b2b4ca32ab5", 00:05:45.419 "assigned_rate_limits": { 00:05:45.419 "rw_ios_per_sec": 0, 00:05:45.419 "rw_mbytes_per_sec": 0, 00:05:45.419 "r_mbytes_per_sec": 0, 00:05:45.419 "w_mbytes_per_sec": 0 00:05:45.419 }, 00:05:45.419 "claimed": 
false, 00:05:45.419 "zoned": false, 00:05:45.419 "supported_io_types": { 00:05:45.419 "read": true, 00:05:45.419 "write": true, 00:05:45.419 "unmap": true, 00:05:45.419 "write_zeroes": true, 00:05:45.419 "flush": true, 00:05:45.419 "reset": true, 00:05:45.419 "compare": false, 00:05:45.419 "compare_and_write": false, 00:05:45.419 "abort": true, 00:05:45.419 "nvme_admin": false, 00:05:45.419 "nvme_io": false 00:05:45.419 }, 00:05:45.419 "memory_domains": [ 00:05:45.419 { 00:05:45.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.419 "dma_device_type": 2 00:05:45.419 } 00:05:45.419 ], 00:05:45.419 "driver_specific": { 00:05:45.419 "passthru": { 00:05:45.419 "name": "Passthru0", 00:05:45.419 "base_bdev_name": "Malloc0" 00:05:45.419 } 00:05:45.419 } 00:05:45.419 } 00:05:45.419 ]' 00:05:45.419 06:46:06 -- rpc/rpc.sh@21 -- # jq length 00:05:45.419 06:46:06 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.419 06:46:06 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.419 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.419 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.419 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.419 06:46:06 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.419 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.419 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.419 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.419 06:46:06 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.419 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.419 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:05:45.419 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.419 06:46:06 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.419 06:46:06 -- rpc/rpc.sh@26 -- # jq length 00:05:45.419 06:46:07 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.419 00:05:45.419 real 0m0.269s 00:05:45.419 user 0m0.161s 00:05:45.419 sys 0m0.046s 00:05:45.419 06:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.419 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.419 ************************************ 00:05:45.419 END TEST rpc_integrity 00:05:45.419 ************************************ 00:05:45.419 06:46:07 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.419 06:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.419 06:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.419 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.419 ************************************ 00:05:45.419 START TEST rpc_plugins 00:05:45.419 ************************************ 00:05:45.419 06:46:07 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:45.419 06:46:07 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.419 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.419 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.677 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.677 06:46:07 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.677 06:46:07 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.677 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.677 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.677 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.677 06:46:07 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.677 { 00:05:45.677 "name": 
"Malloc1", 00:05:45.677 "aliases": [ 00:05:45.677 "3a0fe113-f1b2-41ed-a9e8-b2bda71e1993" 00:05:45.677 ], 00:05:45.677 "product_name": "Malloc disk", 00:05:45.677 "block_size": 4096, 00:05:45.677 "num_blocks": 256, 00:05:45.677 "uuid": "3a0fe113-f1b2-41ed-a9e8-b2bda71e1993", 00:05:45.678 "assigned_rate_limits": { 00:05:45.678 "rw_ios_per_sec": 0, 00:05:45.678 "rw_mbytes_per_sec": 0, 00:05:45.678 "r_mbytes_per_sec": 0, 00:05:45.678 "w_mbytes_per_sec": 0 00:05:45.678 }, 00:05:45.678 "claimed": false, 00:05:45.678 "zoned": false, 00:05:45.678 "supported_io_types": { 00:05:45.678 "read": true, 00:05:45.678 "write": true, 00:05:45.678 "unmap": true, 00:05:45.678 "write_zeroes": true, 00:05:45.678 "flush": true, 00:05:45.678 "reset": true, 00:05:45.678 "compare": false, 00:05:45.678 "compare_and_write": false, 00:05:45.678 "abort": true, 00:05:45.678 "nvme_admin": false, 00:05:45.678 "nvme_io": false 00:05:45.678 }, 00:05:45.678 "memory_domains": [ 00:05:45.678 { 00:05:45.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.678 "dma_device_type": 2 00:05:45.678 } 00:05:45.678 ], 00:05:45.678 "driver_specific": {} 00:05:45.678 } 00:05:45.678 ]' 00:05:45.678 06:46:07 -- rpc/rpc.sh@32 -- # jq length 00:05:45.678 06:46:07 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.678 06:46:07 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.678 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.678 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.678 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.678 06:46:07 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.678 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.678 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.678 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.678 06:46:07 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.678 06:46:07 -- rpc/rpc.sh@36 -- # jq length 00:05:45.678 06:46:07 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.678 00:05:45.678 real 0m0.148s 00:05:45.678 user 0m0.085s 00:05:45.678 sys 0m0.026s 00:05:45.678 06:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.678 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.678 ************************************ 00:05:45.678 END TEST rpc_plugins 00:05:45.678 ************************************ 00:05:45.678 06:46:07 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.678 06:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.678 06:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.678 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.678 ************************************ 00:05:45.678 START TEST rpc_trace_cmd_test 00:05:45.678 ************************************ 00:05:45.678 06:46:07 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:45.678 06:46:07 -- rpc/rpc.sh@40 -- # local info 00:05:45.678 06:46:07 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.678 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.678 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.678 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.678 06:46:07 -- rpc/rpc.sh@42 -- # info='{ 00:05:45.678 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1176646", 00:05:45.678 "tpoint_group_mask": "0x8", 00:05:45.678 "iscsi_conn": { 00:05:45.678 "mask": "0x2", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 
"scsi": { 00:05:45.678 "mask": "0x4", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "bdev": { 00:05:45.678 "mask": "0x8", 00:05:45.678 "tpoint_mask": "0xffffffffffffffff" 00:05:45.678 }, 00:05:45.678 "nvmf_rdma": { 00:05:45.678 "mask": "0x10", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "nvmf_tcp": { 00:05:45.678 "mask": "0x20", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "ftl": { 00:05:45.678 "mask": "0x40", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "blobfs": { 00:05:45.678 "mask": "0x80", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "dsa": { 00:05:45.678 "mask": "0x200", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "thread": { 00:05:45.678 "mask": "0x400", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "nvme_pcie": { 00:05:45.678 "mask": "0x800", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "iaa": { 00:05:45.678 "mask": "0x1000", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "nvme_tcp": { 00:05:45.678 "mask": "0x2000", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 }, 00:05:45.678 "bdev_nvme": { 00:05:45.678 "mask": "0x4000", 00:05:45.678 "tpoint_mask": "0x0" 00:05:45.678 } 00:05:45.678 }' 00:05:45.678 06:46:07 -- rpc/rpc.sh@43 -- # jq length 00:05:45.678 06:46:07 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:45.678 06:46:07 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.937 06:46:07 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.937 06:46:07 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.937 06:46:07 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.937 06:46:07 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.937 06:46:07 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.937 06:46:07 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.937 06:46:07 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.937 00:05:45.937 real 0m0.225s 00:05:45.937 user 0m0.189s 00:05:45.937 sys 0m0.029s 00:05:45.937 06:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.937 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.937 ************************************ 00:05:45.937 END TEST rpc_trace_cmd_test 00:05:45.937 ************************************ 00:05:45.937 06:46:07 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.937 06:46:07 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.937 06:46:07 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.937 06:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.937 06:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.937 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.937 ************************************ 00:05:45.937 START TEST rpc_daemon_integrity 00:05:45.937 ************************************ 00:05:45.937 06:46:07 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:45.937 06:46:07 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.937 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.937 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:45.937 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.937 06:46:07 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.937 06:46:07 -- rpc/rpc.sh@13 -- # jq length 00:05:45.937 06:46:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.937 06:46:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.196 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:05:46.196 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.196 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.196 06:46:07 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.196 06:46:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.196 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.196 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.196 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.196 06:46:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.196 { 00:05:46.196 "name": "Malloc2", 00:05:46.196 "aliases": [ 00:05:46.196 "bdd2238c-8155-4d09-9bba-7caf2e95ad07" 00:05:46.196 ], 00:05:46.196 "product_name": "Malloc disk", 00:05:46.196 "block_size": 512, 00:05:46.196 "num_blocks": 16384, 00:05:46.196 "uuid": "bdd2238c-8155-4d09-9bba-7caf2e95ad07", 00:05:46.196 "assigned_rate_limits": { 00:05:46.196 "rw_ios_per_sec": 0, 00:05:46.196 "rw_mbytes_per_sec": 0, 00:05:46.196 "r_mbytes_per_sec": 0, 00:05:46.196 "w_mbytes_per_sec": 0 00:05:46.196 }, 00:05:46.196 "claimed": false, 00:05:46.196 "zoned": false, 00:05:46.196 "supported_io_types": { 00:05:46.196 "read": true, 00:05:46.196 "write": true, 00:05:46.196 "unmap": true, 00:05:46.196 "write_zeroes": true, 00:05:46.196 "flush": true, 00:05:46.196 "reset": true, 00:05:46.196 "compare": false, 00:05:46.196 "compare_and_write": false, 00:05:46.196 "abort": true, 00:05:46.196 "nvme_admin": false, 00:05:46.196 "nvme_io": false 00:05:46.196 }, 00:05:46.196 "memory_domains": [ 00:05:46.196 { 00:05:46.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.196 "dma_device_type": 2 00:05:46.196 } 00:05:46.196 ], 00:05:46.196 "driver_specific": {} 00:05:46.196 } 00:05:46.196 ]' 00:05:46.196 06:46:07 -- rpc/rpc.sh@17 -- # jq length 00:05:46.196 06:46:07 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.196 06:46:07 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.196 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.196 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.196 [2024-12-15 06:46:07.657923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.196 [2024-12-15 06:46:07.657952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.196 [2024-12-15 06:46:07.657967] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11a4a20 00:05:46.196 [2024-12-15 06:46:07.657979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.196 [2024-12-15 06:46:07.658860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.196 [2024-12-15 06:46:07.658883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.196 Passthru0 00:05:46.196 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.196 06:46:07 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.196 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.196 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.196 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.196 06:46:07 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.196 { 00:05:46.196 "name": "Malloc2", 00:05:46.196 "aliases": [ 00:05:46.196 "bdd2238c-8155-4d09-9bba-7caf2e95ad07" 00:05:46.196 ], 00:05:46.196 "product_name": "Malloc disk", 00:05:46.196 "block_size": 512, 00:05:46.196 "num_blocks": 16384, 00:05:46.197 "uuid": "bdd2238c-8155-4d09-9bba-7caf2e95ad07", 
00:05:46.197 "assigned_rate_limits": { 00:05:46.197 "rw_ios_per_sec": 0, 00:05:46.197 "rw_mbytes_per_sec": 0, 00:05:46.197 "r_mbytes_per_sec": 0, 00:05:46.197 "w_mbytes_per_sec": 0 00:05:46.197 }, 00:05:46.197 "claimed": true, 00:05:46.197 "claim_type": "exclusive_write", 00:05:46.197 "zoned": false, 00:05:46.197 "supported_io_types": { 00:05:46.197 "read": true, 00:05:46.197 "write": true, 00:05:46.197 "unmap": true, 00:05:46.197 "write_zeroes": true, 00:05:46.197 "flush": true, 00:05:46.197 "reset": true, 00:05:46.197 "compare": false, 00:05:46.197 "compare_and_write": false, 00:05:46.197 "abort": true, 00:05:46.197 "nvme_admin": false, 00:05:46.197 "nvme_io": false 00:05:46.197 }, 00:05:46.197 "memory_domains": [ 00:05:46.197 { 00:05:46.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.197 "dma_device_type": 2 00:05:46.197 } 00:05:46.197 ], 00:05:46.197 "driver_specific": {} 00:05:46.197 }, 00:05:46.197 { 00:05:46.197 "name": "Passthru0", 00:05:46.197 "aliases": [ 00:05:46.197 "181448f2-8a96-5162-8270-f3dcef47a3a0" 00:05:46.197 ], 00:05:46.197 "product_name": "passthru", 00:05:46.197 "block_size": 512, 00:05:46.197 "num_blocks": 16384, 00:05:46.197 "uuid": "181448f2-8a96-5162-8270-f3dcef47a3a0", 00:05:46.197 "assigned_rate_limits": { 00:05:46.197 "rw_ios_per_sec": 0, 00:05:46.197 "rw_mbytes_per_sec": 0, 00:05:46.197 "r_mbytes_per_sec": 0, 00:05:46.197 "w_mbytes_per_sec": 0 00:05:46.197 }, 00:05:46.197 "claimed": false, 00:05:46.197 "zoned": false, 00:05:46.197 "supported_io_types": { 00:05:46.197 "read": true, 00:05:46.197 "write": true, 00:05:46.197 "unmap": true, 00:05:46.197 "write_zeroes": true, 00:05:46.197 "flush": true, 00:05:46.197 "reset": true, 00:05:46.197 "compare": false, 00:05:46.197 "compare_and_write": false, 00:05:46.197 "abort": true, 00:05:46.197 "nvme_admin": false, 00:05:46.197 "nvme_io": false 00:05:46.197 }, 00:05:46.197 "memory_domains": [ 00:05:46.197 { 00:05:46.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.197 "dma_device_type": 2 00:05:46.197 } 00:05:46.197 ], 00:05:46.197 "driver_specific": { 00:05:46.197 "passthru": { 00:05:46.197 "name": "Passthru0", 00:05:46.197 "base_bdev_name": "Malloc2" 00:05:46.197 } 00:05:46.197 } 00:05:46.197 } 00:05:46.197 ]' 00:05:46.197 06:46:07 -- rpc/rpc.sh@21 -- # jq length 00:05:46.197 06:46:07 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.197 06:46:07 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.197 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.197 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.197 06:46:07 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.197 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.197 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.197 06:46:07 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.197 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.197 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.197 06:46:07 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.197 06:46:07 -- rpc/rpc.sh@26 -- # jq length 00:05:46.197 06:46:07 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.197 00:05:46.197 real 0m0.289s 00:05:46.197 user 0m0.180s 00:05:46.197 sys 0m0.047s 00:05:46.197 06:46:07 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:05:46.197 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:05:46.197 ************************************ 00:05:46.197 END TEST rpc_daemon_integrity 00:05:46.197 ************************************ 00:05:46.456 06:46:07 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.456 06:46:07 -- rpc/rpc.sh@84 -- # killprocess 1176646 00:05:46.456 06:46:07 -- common/autotest_common.sh@936 -- # '[' -z 1176646 ']' 00:05:46.456 06:46:07 -- common/autotest_common.sh@940 -- # kill -0 1176646 00:05:46.456 06:46:07 -- common/autotest_common.sh@941 -- # uname 00:05:46.456 06:46:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.457 06:46:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1176646 00:05:46.457 06:46:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.457 06:46:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.457 06:46:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1176646' 00:05:46.457 killing process with pid 1176646 00:05:46.457 06:46:07 -- common/autotest_common.sh@955 -- # kill 1176646 00:05:46.457 06:46:07 -- common/autotest_common.sh@960 -- # wait 1176646 00:05:46.716 00:05:46.716 real 0m2.523s 00:05:46.716 user 0m3.145s 00:05:46.716 sys 0m0.782s 00:05:46.716 06:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.716 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:05:46.716 ************************************ 00:05:46.716 END TEST rpc 00:05:46.716 ************************************ 00:05:46.716 06:46:08 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.716 06:46:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.716 06:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.716 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:05:46.716 ************************************ 00:05:46.716 START TEST rpc_client 00:05:46.716 ************************************ 00:05:46.716 06:46:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.716 * Looking for test storage... 
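
The killprocess teardown above refuses to signal anything it cannot positively identify: it checks that the PID is still alive and that its command name is not sudo before killing and reaping it. A condensed sketch of that guard (the PID is a placeholder from this run):

    pid=1176646                                  # recorded when spdk_tgt started
    if kill -0 "$pid" 2>/dev/null; then
        comm=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
        if [ "$comm" != "sudo" ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null              # reaping works because it is our child
        fi
    fi
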
00:05:46.716 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:46.976 06:46:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.976 06:46:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.976 06:46:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.976 06:46:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.976 06:46:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.976 06:46:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.976 06:46:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.976 06:46:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.976 06:46:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.976 06:46:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.976 06:46:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.976 06:46:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.976 06:46:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.976 06:46:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.976 06:46:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.976 06:46:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.976 06:46:08 -- scripts/common.sh@344 -- # : 1 00:05:46.976 06:46:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.976 06:46:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.976 06:46:08 -- scripts/common.sh@364 -- # decimal 1 00:05:46.976 06:46:08 -- scripts/common.sh@352 -- # local d=1 00:05:46.976 06:46:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.976 06:46:08 -- scripts/common.sh@354 -- # echo 1 00:05:46.976 06:46:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:46.976 06:46:08 -- scripts/common.sh@365 -- # decimal 2 00:05:46.976 06:46:08 -- scripts/common.sh@352 -- # local d=2 00:05:46.976 06:46:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.976 06:46:08 -- scripts/common.sh@354 -- # echo 2 00:05:46.976 06:46:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:46.976 06:46:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:46.976 06:46:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:46.976 06:46:08 -- scripts/common.sh@367 -- # return 0 00:05:46.976 06:46:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.976 06:46:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:46.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.976 --rc genhtml_branch_coverage=1 00:05:46.976 --rc genhtml_function_coverage=1 00:05:46.976 --rc genhtml_legend=1 00:05:46.976 --rc geninfo_all_blocks=1 00:05:46.976 --rc geninfo_unexecuted_blocks=1 00:05:46.976 00:05:46.976 ' 00:05:46.976 06:46:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:46.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.976 --rc genhtml_branch_coverage=1 00:05:46.976 --rc genhtml_function_coverage=1 00:05:46.976 --rc genhtml_legend=1 00:05:46.976 --rc geninfo_all_blocks=1 00:05:46.976 --rc geninfo_unexecuted_blocks=1 00:05:46.976 00:05:46.976 ' 00:05:46.976 06:46:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:46.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.976 --rc genhtml_branch_coverage=1 00:05:46.976 --rc genhtml_function_coverage=1 00:05:46.976 --rc genhtml_legend=1 00:05:46.976 --rc geninfo_all_blocks=1 00:05:46.976 --rc geninfo_unexecuted_blocks=1 00:05:46.976 00:05:46.976 ' 
00:05:46.976 06:46:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:46.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.976 --rc genhtml_branch_coverage=1 00:05:46.976 --rc genhtml_function_coverage=1 00:05:46.976 --rc genhtml_legend=1 00:05:46.976 --rc geninfo_all_blocks=1 00:05:46.976 --rc geninfo_unexecuted_blocks=1 00:05:46.976 00:05:46.976 ' 00:05:46.976 06:46:08 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:46.976 OK 00:05:46.976 06:46:08 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.976 00:05:46.976 real 0m0.213s 00:05:46.976 user 0m0.111s 00:05:46.976 sys 0m0.119s 00:05:46.976 06:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.976 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:05:46.976 ************************************ 00:05:46.976 END TEST rpc_client 00:05:46.976 ************************************ 00:05:46.976 06:46:08 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.976 06:46:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.976 06:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.976 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:05:46.976 ************************************ 00:05:46.976 START TEST json_config 00:05:46.976 ************************************ 00:05:46.976 06:46:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.976 06:46:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.976 06:46:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.976 06:46:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.236 06:46:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.236 06:46:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.236 06:46:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.236 06:46:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.236 06:46:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.236 06:46:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.236 06:46:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.236 06:46:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.236 06:46:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.236 06:46:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.236 06:46:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.236 06:46:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.236 06:46:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.236 06:46:08 -- scripts/common.sh@344 -- # : 1 00:05:47.236 06:46:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.236 06:46:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.236 06:46:08 -- scripts/common.sh@364 -- # decimal 1 00:05:47.236 06:46:08 -- scripts/common.sh@352 -- # local d=1 00:05:47.236 06:46:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.236 06:46:08 -- scripts/common.sh@354 -- # echo 1 00:05:47.236 06:46:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.236 06:46:08 -- scripts/common.sh@365 -- # decimal 2 00:05:47.236 06:46:08 -- scripts/common.sh@352 -- # local d=2 00:05:47.236 06:46:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.236 06:46:08 -- scripts/common.sh@354 -- # echo 2 00:05:47.236 06:46:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.236 06:46:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.236 06:46:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.236 06:46:08 -- scripts/common.sh@367 -- # return 0 00:05:47.236 06:46:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.236 06:46:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.236 --rc genhtml_branch_coverage=1 00:05:47.236 --rc genhtml_function_coverage=1 00:05:47.236 --rc genhtml_legend=1 00:05:47.236 --rc geninfo_all_blocks=1 00:05:47.236 --rc geninfo_unexecuted_blocks=1 00:05:47.236 00:05:47.236 ' 00:05:47.236 06:46:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.236 --rc genhtml_branch_coverage=1 00:05:47.236 --rc genhtml_function_coverage=1 00:05:47.236 --rc genhtml_legend=1 00:05:47.236 --rc geninfo_all_blocks=1 00:05:47.236 --rc geninfo_unexecuted_blocks=1 00:05:47.236 00:05:47.236 ' 00:05:47.236 06:46:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.236 --rc genhtml_branch_coverage=1 00:05:47.236 --rc genhtml_function_coverage=1 00:05:47.236 --rc genhtml_legend=1 00:05:47.236 --rc geninfo_all_blocks=1 00:05:47.236 --rc geninfo_unexecuted_blocks=1 00:05:47.236 00:05:47.236 ' 00:05:47.236 06:46:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.236 --rc genhtml_branch_coverage=1 00:05:47.236 --rc genhtml_function_coverage=1 00:05:47.236 --rc genhtml_legend=1 00:05:47.236 --rc geninfo_all_blocks=1 00:05:47.236 --rc geninfo_unexecuted_blocks=1 00:05:47.236 00:05:47.236 ' 00:05:47.236 06:46:08 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.236 06:46:08 -- nvmf/common.sh@7 -- # uname -s 00:05:47.236 06:46:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.237 06:46:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.237 06:46:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.237 06:46:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.237 06:46:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.237 06:46:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.237 06:46:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.237 06:46:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.237 06:46:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.237 06:46:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.237 06:46:08 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:47.237 06:46:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:47.237 06:46:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.237 06:46:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.237 06:46:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.237 06:46:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:47.237 06:46:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.237 06:46:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.237 06:46:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.237 06:46:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.237 06:46:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.237 06:46:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.237 06:46:08 -- paths/export.sh@5 -- # export PATH 00:05:47.237 06:46:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.237 06:46:08 -- nvmf/common.sh@46 -- # : 0 00:05:47.237 06:46:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:47.237 06:46:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:47.237 06:46:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:47.237 06:46:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.237 06:46:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.237 06:46:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:47.237 06:46:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:47.237 06:46:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:47.237 06:46:08 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:47.237 06:46:08 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:47.237 06:46:08 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:47.237 06:46:08 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.237 06:46:08 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:47.237 06:46:08 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:47.237 06:46:08 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:47.237 06:46:08 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:47.237 06:46:08 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:47.237 06:46:08 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:47.237 06:46:08 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:47.237 06:46:08 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:47.237 06:46:08 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:47.237 06:46:08 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.237 06:46:08 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:47.237 INFO: JSON configuration test init 00:05:47.237 06:46:08 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:47.237 06:46:08 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:47.237 06:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.237 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:05:47.237 06:46:08 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:47.237 06:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.237 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:05:47.237 06:46:08 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:47.237 06:46:08 -- json_config/json_config.sh@98 -- # local app=target 00:05:47.237 06:46:08 -- json_config/json_config.sh@99 -- # shift 00:05:47.237 06:46:08 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:47.237 06:46:08 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:47.237 06:46:08 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:47.237 06:46:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.237 06:46:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.237 06:46:08 -- json_config/json_config.sh@111 -- # app_pid[$app]=1177277 00:05:47.237 06:46:08 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:47.237 Waiting for target to run... 
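
json_config_test_start_app launches the target with --wait-for-rpc, so the app brings up only the RPC server and pauses until configuration arrives; waitforlisten then blocks until the UNIX-domain socket answers. A rough equivalent of that start-up handshake, assuming SPDK-tree-relative paths; rpc_get_methods is used here only as a cheap liveness probe (the harness's waitforlisten checks readiness more thoroughly):

    sock=/var/tmp/spdk_tgt.sock
    build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
    pid=$!
    # Poll until the RPC socket accepts requests.
    until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || exit 1     # bail out if the target died
        sleep 0.5
    done
    # As this test does next, load a full config, which also starts the framework:
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s "$sock" load_config
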
00:05:47.237 06:46:08 -- json_config/json_config.sh@114 -- # waitforlisten 1177277 /var/tmp/spdk_tgt.sock 00:05:47.237 06:46:08 -- common/autotest_common.sh@829 -- # '[' -z 1177277 ']' 00:05:47.237 06:46:08 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:47.237 06:46:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.237 06:46:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.237 06:46:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.237 06:46:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.237 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:05:47.237 [2024-12-15 06:46:08.776561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:47.237 [2024-12-15 06:46:08.776621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177277 ] 00:05:47.237 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.497 [2024-12-15 06:46:09.082053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.497 [2024-12-15 06:46:09.102934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.497 [2024-12-15 06:46:09.103037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.065 06:46:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.065 06:46:09 -- common/autotest_common.sh@862 -- # return 0 00:05:48.065 06:46:09 -- json_config/json_config.sh@115 -- # echo '' 00:05:48.065 00:05:48.065 06:46:09 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:48.065 06:46:09 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:48.065 06:46:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.065 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:05:48.065 06:46:09 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:48.065 06:46:09 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:48.065 06:46:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.065 06:46:09 -- common/autotest_common.sh@10 -- # set +x 00:05:48.065 06:46:09 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:48.065 06:46:09 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:48.065 06:46:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:51.354 06:46:12 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:51.354 06:46:12 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:51.354 06:46:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.354 06:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.354 06:46:12 -- json_config/json_config.sh@48 -- # local ret=0 00:05:51.354 06:46:12 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:51.354 06:46:12 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:05:51.354 06:46:12 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:51.354 06:46:12 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:51.354 06:46:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:51.354 06:46:12 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:51.354 06:46:12 -- json_config/json_config.sh@51 -- # local get_types 00:05:51.354 06:46:12 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:51.354 06:46:12 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:51.354 06:46:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.354 06:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.354 06:46:12 -- json_config/json_config.sh@58 -- # return 0 00:05:51.354 06:46:12 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:51.354 06:46:12 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:51.354 06:46:12 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:51.354 06:46:12 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:51.354 06:46:12 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:51.354 06:46:12 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:51.354 06:46:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.354 06:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:51.354 06:46:12 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:51.354 06:46:12 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:51.354 06:46:12 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:51.354 06:46:12 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:51.354 06:46:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:51.354 06:46:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.354 06:46:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:51.354 06:46:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:51.354 06:46:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:51.354 06:46:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.354 06:46:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:51.354 06:46:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.354 06:46:12 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:51.354 06:46:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:51.354 06:46:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:51.354 06:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.477 06:46:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:59.477 06:46:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:59.477 06:46:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:59.477 06:46:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:59.477 06:46:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:59.477 06:46:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:59.477 06:46:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:59.477 06:46:19 -- nvmf/common.sh@294 -- # net_devs=() 00:05:59.477 06:46:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:59.477 06:46:19 -- nvmf/common.sh@295 -- # 
e810=() 00:05:59.477 06:46:19 -- nvmf/common.sh@295 -- # local -ga e810 00:05:59.477 06:46:19 -- nvmf/common.sh@296 -- # x722=() 00:05:59.477 06:46:19 -- nvmf/common.sh@296 -- # local -ga x722 00:05:59.477 06:46:19 -- nvmf/common.sh@297 -- # mlx=() 00:05:59.477 06:46:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:59.477 06:46:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:59.477 06:46:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:59.477 06:46:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:59.477 06:46:19 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:59.477 06:46:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:59.477 06:46:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:59.477 06:46:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:59.477 06:46:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:59.477 06:46:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:59.477 06:46:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:59.477 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:59.477 06:46:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:59.477 06:46:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:59.477 06:46:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:59.477 06:46:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:59.477 06:46:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:59.477 06:46:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:59.477 06:46:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:59.477 06:46:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:59.478 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:59.478 06:46:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:59.478 06:46:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:59.478 06:46:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.478 06:46:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:05:59.478 06:46:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.478 06:46:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:59.478 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.478 06:46:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.478 06:46:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:59.478 06:46:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.478 06:46:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:59.478 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.478 06:46:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:59.478 06:46:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:59.478 06:46:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:59.478 06:46:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:59.478 06:46:19 -- nvmf/common.sh@57 -- # uname 00:05:59.478 06:46:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:59.478 06:46:19 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:59.478 06:46:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:59.478 06:46:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:59.478 06:46:19 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:59.478 06:46:19 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:59.478 06:46:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:59.478 06:46:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:59.478 06:46:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:59.478 06:46:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:59.478 06:46:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:59.478 06:46:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:59.478 06:46:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:59.478 06:46:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:59.478 06:46:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:59.478 06:46:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:59.478 06:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@104 -- # continue 2 00:05:59.478 06:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@104 -- # continue 2 00:05:59.478 06:46:19 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:05:59.478 06:46:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:59.478 06:46:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:05:59.478 06:46:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:59.478 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:59.478 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:59.478 altname enp217s0f0np0 00:05:59.478 altname ens818f0np0 00:05:59.478 inet 192.168.100.8/24 scope global mlx_0_0 00:05:59.478 valid_lft forever preferred_lft forever 00:05:59.478 06:46:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:59.478 06:46:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:59.478 06:46:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:05:59.478 06:46:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:59.478 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:59.478 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:59.478 altname enp217s0f1np1 00:05:59.478 altname ens818f1np1 00:05:59.478 inet 192.168.100.9/24 scope global mlx_0_1 00:05:59.478 valid_lft forever preferred_lft forever 00:05:59.478 06:46:19 -- nvmf/common.sh@410 -- # return 0 00:05:59.478 06:46:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:59.478 06:46:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:59.478 06:46:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:59.478 06:46:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:59.478 06:46:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:59.478 06:46:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:59.478 06:46:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:59.478 06:46:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:59.478 06:46:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:59.478 06:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@104 -- # continue 2 00:05:59.478 06:46:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.478 06:46:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:59.478 06:46:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:59.478 06:46:19 -- 
nvmf/common.sh@104 -- # continue 2 00:05:59.478 06:46:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:59.478 06:46:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:59.478 06:46:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:59.478 06:46:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:59.478 06:46:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:59.478 06:46:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:59.478 192.168.100.9' 00:05:59.478 06:46:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:05:59.478 192.168.100.9' 00:05:59.478 06:46:19 -- nvmf/common.sh@445 -- # head -n 1 00:05:59.478 06:46:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:59.478 06:46:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:59.478 192.168.100.9' 00:05:59.478 06:46:19 -- nvmf/common.sh@446 -- # tail -n +2 00:05:59.478 06:46:19 -- nvmf/common.sh@446 -- # head -n 1 00:05:59.478 06:46:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:59.478 06:46:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:59.478 06:46:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:59.478 06:46:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:59.478 06:46:19 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:59.478 06:46:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:59.478 06:46:20 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:59.478 06:46:20 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.478 06:46:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.478 MallocForNvmf0 00:05:59.478 06:46:20 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:59.478 06:46:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:59.479 MallocForNvmf1 00:05:59.479 06:46:20 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:59.479 06:46:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:59.479 [2024-12-15 06:46:20.556091] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:59.479 [2024-12-15 06:46:20.584460] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1496560/0x14a31c0) succeed. 00:05:59.479 [2024-12-15 06:46:20.597028] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1498700/0x14e4860) succeed. 
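
allocate_nic_ips above derives each RDMA interface's IPv4 address from ip(8) one-line output: field 4 is addr/prefix, and the prefix is stripped with cut. The same extraction as a small helper, using the interface names discovered in this run:

    # Print the IPv4 address assigned to an interface, without the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9 in this run
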
00:05:59.479 06:46:20 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.479 06:46:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.479 06:46:20 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.479 06:46:20 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.479 06:46:21 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:59.479 06:46:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:59.738 06:46:21 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:59.738 06:46:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:59.738 [2024-12-15 06:46:21.345811] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:59.738 06:46:21 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:59.738 06:46:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.738 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.997 06:46:21 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:59.997 06:46:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.997 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.997 06:46:21 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:59.997 06:46:21 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:59.997 06:46:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:59.997 MallocBdevForConfigChangeCheck 00:06:00.257 06:46:21 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:00.257 06:46:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.257 06:46:21 -- common/autotest_common.sh@10 -- # set +x 00:06:00.257 06:46:21 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:00.257 06:46:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.516 06:46:21 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:00.516 INFO: shutting down applications... 
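Condensing the xtrace noise, the NVMe-oF target configuration built in this phase comes down to a short sequence of rpc.py calls. Paths, bdev names, the NQN, and the 192.168.100.8:4420 listener are the ones used in this run; per the warning logged above, the -c 0 in-capsule data size is raised to the 256-byte minimum by the target itself:

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024-byte blocks
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0          # RDMA transport; -c 0 bumped to 256
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420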
00:06:00.516 06:46:21 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:00.516 06:46:21 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:00.516 06:46:21 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:00.516 06:46:21 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:03.051 Calling clear_iscsi_subsystem 00:06:03.051 Calling clear_nvmf_subsystem 00:06:03.051 Calling clear_nbd_subsystem 00:06:03.051 Calling clear_ublk_subsystem 00:06:03.051 Calling clear_vhost_blk_subsystem 00:06:03.051 Calling clear_vhost_scsi_subsystem 00:06:03.051 Calling clear_scheduler_subsystem 00:06:03.051 Calling clear_bdev_subsystem 00:06:03.051 Calling clear_accel_subsystem 00:06:03.051 Calling clear_vmd_subsystem 00:06:03.051 Calling clear_sock_subsystem 00:06:03.051 Calling clear_iobuf_subsystem 00:06:03.051 06:46:24 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:03.051 06:46:24 -- json_config/json_config.sh@396 -- # count=100 00:06:03.051 06:46:24 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:03.051 06:46:24 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.051 06:46:24 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:03.051 06:46:24 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:03.310 06:46:24 -- json_config/json_config.sh@398 -- # break 00:06:03.310 06:46:24 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:03.310 06:46:24 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:03.310 06:46:24 -- json_config/json_config.sh@120 -- # local app=target 00:06:03.310 06:46:24 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:03.310 06:46:24 -- json_config/json_config.sh@124 -- # [[ -n 1177277 ]] 00:06:03.310 06:46:24 -- json_config/json_config.sh@127 -- # kill -SIGINT 1177277 00:06:03.310 06:46:24 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:03.310 06:46:24 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:03.310 06:46:24 -- json_config/json_config.sh@130 -- # kill -0 1177277 00:06:03.310 06:46:24 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:03.878 06:46:25 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:03.879 06:46:25 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:03.879 06:46:25 -- json_config/json_config.sh@130 -- # kill -0 1177277 00:06:03.879 06:46:25 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:03.879 06:46:25 -- json_config/json_config.sh@132 -- # break 00:06:03.879 06:46:25 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:03.879 06:46:25 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:03.879 SPDK target shutdown done 00:06:03.879 06:46:25 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:03.879 INFO: relaunching applications... 
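The shutdown traced here (json_config.sh@127 through @134) is a SIGINT followed by a bounded poll with kill -0. A self-contained sketch of that loop, with the app_pid bookkeeping of the real helper elided:

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        # Poll up to 30 times, 0.5 s apart (~15 s total), until the pid is gone.
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1   # target did not exit in time
    }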
00:06:03.879 06:46:25 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.879 06:46:25 -- json_config/json_config.sh@98 -- # local app=target 00:06:03.879 06:46:25 -- json_config/json_config.sh@99 -- # shift 00:06:03.879 06:46:25 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:03.879 06:46:25 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:03.879 06:46:25 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:03.879 06:46:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:03.879 06:46:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:03.879 06:46:25 -- json_config/json_config.sh@111 -- # app_pid[$app]=1182353 00:06:03.879 06:46:25 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:03.879 Waiting for target to run... 00:06:03.879 06:46:25 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.879 06:46:25 -- json_config/json_config.sh@114 -- # waitforlisten 1182353 /var/tmp/spdk_tgt.sock 00:06:03.879 06:46:25 -- common/autotest_common.sh@829 -- # '[' -z 1182353 ']' 00:06:03.879 06:46:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.879 06:46:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.879 06:46:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.879 06:46:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.879 06:46:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.879 [2024-12-15 06:46:25.384026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:03.879 [2024-12-15 06:46:25.384088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182353 ] 00:06:03.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.138 [2024-12-15 06:46:25.683831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.138 [2024-12-15 06:46:25.703935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.138 [2024-12-15 06:46:25.704039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.428 [2024-12-15 06:46:28.734053] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1214fb0/0x11d33f0) succeed. 00:06:07.428 [2024-12-15 06:46:28.745522] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1217150/0x1080f90) succeed. 
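For the relaunch, json_config_test_start_app starts spdk_tgt against a fixed RPC socket and then blocks in waitforlisten. The helper's internals are not fully traced in this log; the sketch below approximates it by retrying an RPC (rpc_get_methods, which this log later shows to be a valid method) until the socket answers, so treat it as a stand-in rather than the exact autotest_common.sh implementation:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    sock=/var/tmp/spdk_tgt.sock
    "$spdk/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json "$spdk/spdk_tgt_config.json" &
    pid=$!
    # Retry until the RPC socket responds (the real waitforlisten allows 100 retries).
    for ((i = 0; i < 100; i++)); do
        "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    echo "target $pid is up and listening on $sock"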
00:06:07.428 [2024-12-15 06:46:28.801893] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:07.428 06:46:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.428 06:46:28 -- common/autotest_common.sh@862 -- # return 0 00:06:07.428 06:46:28 -- json_config/json_config.sh@115 -- # echo '' 00:06:07.428 00:06:07.428 06:46:28 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:07.428 06:46:28 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:07.428 INFO: Checking if target configuration is the same... 00:06:07.428 06:46:28 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:07.428 06:46:28 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.428 06:46:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.428 + '[' 2 -ne 2 ']' 00:06:07.428 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.428 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:07.428 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:07.428 +++ basename /dev/fd/62 00:06:07.428 ++ mktemp /tmp/62.XXX 00:06:07.428 + tmp_file_1=/tmp/62.js8 00:06:07.428 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.428 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.428 + tmp_file_2=/tmp/spdk_tgt_config.json.xAQ 00:06:07.428 + ret=0 00:06:07.428 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.687 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.687 + diff -u /tmp/62.js8 /tmp/spdk_tgt_config.json.xAQ 00:06:07.687 + echo 'INFO: JSON config files are the same' 00:06:07.687 INFO: JSON config files are the same 00:06:07.687 + rm /tmp/62.js8 /tmp/spdk_tgt_config.json.xAQ 00:06:07.687 + exit 0 00:06:07.687 06:46:29 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:07.687 06:46:29 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:07.687 INFO: changing configuration and checking if this can be detected... 00:06:07.687 06:46:29 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.687 06:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.946 06:46:29 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.946 06:46:29 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:07.946 06:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.946 + '[' 2 -ne 2 ']' 00:06:07.946 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.946 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
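The same/changed verdicts in this phase come from json_diff.sh, which normalizes both configurations before a textual diff so that key ordering cannot produce false positives. Roughly, and assuming config_filter.py reads stdin as the trace suggests:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    sort_cfg="$spdk/test/json_config/config_filter.py -method sort"
    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # Live config, key-sorted so field ordering cannot cause spurious diffs.
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | $sort_cfg > "$tmp_file_1"
    # Saved config, normalized the same way.
    $sort_cfg < "$spdk/spdk_tgt_config.json" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$tmp_file_1" "$tmp_file_2"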
00:06:07.946 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:07.946 +++ basename /dev/fd/62 00:06:07.946 ++ mktemp /tmp/62.XXX 00:06:07.946 + tmp_file_1=/tmp/62.x5m 00:06:07.946 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.946 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.946 + tmp_file_2=/tmp/spdk_tgt_config.json.swg 00:06:07.946 + ret=0 00:06:07.946 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.205 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.205 + diff -u /tmp/62.x5m /tmp/spdk_tgt_config.json.swg 00:06:08.205 + ret=1 00:06:08.205 + echo '=== Start of file: /tmp/62.x5m ===' 00:06:08.205 + cat /tmp/62.x5m 00:06:08.205 + echo '=== End of file: /tmp/62.x5m ===' 00:06:08.205 + echo '' 00:06:08.205 + echo '=== Start of file: /tmp/spdk_tgt_config.json.swg ===' 00:06:08.205 + cat /tmp/spdk_tgt_config.json.swg 00:06:08.205 + echo '=== End of file: /tmp/spdk_tgt_config.json.swg ===' 00:06:08.205 + echo '' 00:06:08.205 + rm /tmp/62.x5m /tmp/spdk_tgt_config.json.swg 00:06:08.205 + exit 1 00:06:08.205 06:46:29 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:08.205 INFO: configuration change detected. 00:06:08.206 06:46:29 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:08.206 06:46:29 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:08.206 06:46:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.206 06:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 06:46:29 -- json_config/json_config.sh@360 -- # local ret=0 00:06:08.206 06:46:29 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:08.206 06:46:29 -- json_config/json_config.sh@370 -- # [[ -n 1182353 ]] 00:06:08.206 06:46:29 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:08.206 06:46:29 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:08.206 06:46:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.206 06:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 06:46:29 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:08.206 06:46:29 -- json_config/json_config.sh@246 -- # uname -s 00:06:08.206 06:46:29 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:08.206 06:46:29 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:08.206 06:46:29 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:08.206 06:46:29 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:08.206 06:46:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.206 06:46:29 -- common/autotest_common.sh@10 -- # set +x 00:06:08.466 06:46:29 -- json_config/json_config.sh@376 -- # killprocess 1182353 00:06:08.466 06:46:29 -- common/autotest_common.sh@936 -- # '[' -z 1182353 ']' 00:06:08.466 06:46:29 -- common/autotest_common.sh@940 -- # kill -0 1182353 00:06:08.466 06:46:29 -- common/autotest_common.sh@941 -- # uname 00:06:08.466 06:46:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.466 06:46:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1182353 00:06:08.466 06:46:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.466 06:46:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.466 06:46:29 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 1182353' 00:06:08.466 killing process with pid 1182353 00:06:08.466 06:46:29 -- common/autotest_common.sh@955 -- # kill 1182353 00:06:08.466 06:46:29 -- common/autotest_common.sh@960 -- # wait 1182353 00:06:11.095 06:46:32 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.095 06:46:32 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:11.095 06:46:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.095 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:11.095 06:46:32 -- json_config/json_config.sh@381 -- # return 0 00:06:11.095 06:46:32 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:11.095 INFO: Success 00:06:11.095 06:46:32 -- json_config/json_config.sh@1 -- # nvmftestfini 00:06:11.095 06:46:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:11.095 06:46:32 -- nvmf/common.sh@116 -- # sync 00:06:11.095 06:46:32 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:06:11.095 06:46:32 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:06:11.095 06:46:32 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:11.095 06:46:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:11.095 06:46:32 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:06:11.095 00:06:11.095 real 0m23.856s 00:06:11.095 user 0m26.995s 00:06:11.095 sys 0m7.455s 00:06:11.095 06:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.095 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:11.095 ************************************ 00:06:11.095 END TEST json_config 00:06:11.095 ************************************ 00:06:11.095 06:46:32 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:11.095 06:46:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.095 06:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.095 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:11.095 ************************************ 00:06:11.095 START TEST json_config_extra_key 00:06:11.095 ************************************ 00:06:11.095 06:46:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:11.095 06:46:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.095 06:46:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.095 06:46:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.095 06:46:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.095 06:46:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.095 06:46:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.095 06:46:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.095 06:46:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.095 06:46:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.095 06:46:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.095 06:46:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.095 06:46:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.095 06:46:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.095 06:46:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.095 06:46:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.095 06:46:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.095 06:46:32 -- 
scripts/common.sh@344 -- # : 1 00:06:11.095 06:46:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.095 06:46:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.095 06:46:32 -- scripts/common.sh@364 -- # decimal 1 00:06:11.095 06:46:32 -- scripts/common.sh@352 -- # local d=1 00:06:11.095 06:46:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.095 06:46:32 -- scripts/common.sh@354 -- # echo 1 00:06:11.095 06:46:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.095 06:46:32 -- scripts/common.sh@365 -- # decimal 2 00:06:11.095 06:46:32 -- scripts/common.sh@352 -- # local d=2 00:06:11.095 06:46:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.095 06:46:32 -- scripts/common.sh@354 -- # echo 2 00:06:11.095 06:46:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.095 06:46:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.095 06:46:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.095 06:46:32 -- scripts/common.sh@367 -- # return 0 00:06:11.095 06:46:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.095 06:46:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.095 --rc genhtml_branch_coverage=1 00:06:11.095 --rc genhtml_function_coverage=1 00:06:11.095 --rc genhtml_legend=1 00:06:11.095 --rc geninfo_all_blocks=1 00:06:11.095 --rc geninfo_unexecuted_blocks=1 00:06:11.095 00:06:11.095 ' 00:06:11.095 06:46:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.095 --rc genhtml_branch_coverage=1 00:06:11.095 --rc genhtml_function_coverage=1 00:06:11.095 --rc genhtml_legend=1 00:06:11.095 --rc geninfo_all_blocks=1 00:06:11.096 --rc geninfo_unexecuted_blocks=1 00:06:11.096 00:06:11.096 ' 00:06:11.096 06:46:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:11.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.096 --rc genhtml_branch_coverage=1 00:06:11.096 --rc genhtml_function_coverage=1 00:06:11.096 --rc genhtml_legend=1 00:06:11.096 --rc geninfo_all_blocks=1 00:06:11.096 --rc geninfo_unexecuted_blocks=1 00:06:11.096 00:06:11.096 ' 00:06:11.096 06:46:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.096 --rc genhtml_branch_coverage=1 00:06:11.096 --rc genhtml_function_coverage=1 00:06:11.096 --rc genhtml_legend=1 00:06:11.096 --rc geninfo_all_blocks=1 00:06:11.096 --rc geninfo_unexecuted_blocks=1 00:06:11.096 00:06:11.096 ' 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.096 06:46:32 -- nvmf/common.sh@7 -- # uname -s 00:06:11.096 06:46:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.096 06:46:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.096 06:46:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.096 06:46:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.096 06:46:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.096 06:46:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.096 06:46:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.096 06:46:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.096 06:46:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:06:11.096 06:46:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.096 06:46:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:11.096 06:46:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:11.096 06:46:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.096 06:46:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.096 06:46:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.096 06:46:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:11.096 06:46:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.096 06:46:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.096 06:46:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.096 06:46:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.096 06:46:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.096 06:46:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.096 06:46:32 -- paths/export.sh@5 -- # export PATH 00:06:11.096 06:46:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.096 06:46:32 -- nvmf/common.sh@46 -- # : 0 00:06:11.096 06:46:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:11.096 06:46:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:11.096 06:46:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:11.096 06:46:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.096 06:46:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.096 06:46:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:11.096 06:46:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:11.096 06:46:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@16 
-- # declare -A app_pid 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:11.096 INFO: launching applications... 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1183836 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:11.096 Waiting for target to run... 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1183836 /var/tmp/spdk_tgt.sock 00:06:11.096 06:46:32 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:11.096 06:46:32 -- common/autotest_common.sh@829 -- # '[' -z 1183836 ']' 00:06:11.096 06:46:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.096 06:46:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.096 06:46:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.096 06:46:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.096 06:46:32 -- common/autotest_common.sh@10 -- # set +x 00:06:11.096 [2024-12-15 06:46:32.667208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:11.096 [2024-12-15 06:46:32.667266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183836 ] 00:06:11.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.358 [2024-12-15 06:46:32.970505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.359 [2024-12-15 06:46:32.990162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.359 [2024-12-15 06:46:32.990267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.927 06:46:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.927 06:46:33 -- common/autotest_common.sh@862 -- # return 0 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:11.927 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:11.927 INFO: shutting down applications... 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1183836 ]] 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1183836 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1183836 00:06:11.927 06:46:33 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1183836 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:12.496 SPDK target shutdown done 00:06:12.496 06:46:33 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:12.496 Success 00:06:12.496 00:06:12.496 real 0m1.550s 00:06:12.496 user 0m1.240s 00:06:12.496 sys 0m0.458s 00:06:12.496 06:46:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.496 06:46:33 -- common/autotest_common.sh@10 -- # set +x 00:06:12.496 ************************************ 00:06:12.496 END TEST json_config_extra_key 00:06:12.496 ************************************ 00:06:12.496 06:46:34 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.496 06:46:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.496 06:46:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.496 06:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.496 ************************************ 00:06:12.496 START TEST alias_rpc 00:06:12.496 ************************************ 00:06:12.496 06:46:34 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.496 * Looking for test storage... 00:06:12.496 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:12.496 06:46:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:12.756 06:46:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:12.756 06:46:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:12.756 06:46:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:12.756 06:46:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:12.756 06:46:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:12.756 06:46:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:12.756 06:46:34 -- scripts/common.sh@335 -- # IFS=.-: 00:06:12.756 06:46:34 -- scripts/common.sh@335 -- # read -ra ver1 00:06:12.756 06:46:34 -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.756 06:46:34 -- scripts/common.sh@336 -- # read -ra ver2 00:06:12.756 06:46:34 -- scripts/common.sh@337 -- # local 'op=<' 00:06:12.756 06:46:34 -- scripts/common.sh@339 -- # ver1_l=2 00:06:12.756 06:46:34 -- scripts/common.sh@340 -- # ver2_l=1 00:06:12.756 06:46:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:12.756 06:46:34 -- scripts/common.sh@343 -- # case "$op" in 00:06:12.756 06:46:34 -- scripts/common.sh@344 -- # : 1 00:06:12.756 06:46:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:12.756 06:46:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.756 06:46:34 -- scripts/common.sh@364 -- # decimal 1 00:06:12.756 06:46:34 -- scripts/common.sh@352 -- # local d=1 00:06:12.756 06:46:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.756 06:46:34 -- scripts/common.sh@354 -- # echo 1 00:06:12.756 06:46:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:12.756 06:46:34 -- scripts/common.sh@365 -- # decimal 2 00:06:12.756 06:46:34 -- scripts/common.sh@352 -- # local d=2 00:06:12.756 06:46:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.756 06:46:34 -- scripts/common.sh@354 -- # echo 2 00:06:12.756 06:46:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:12.756 06:46:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:12.756 06:46:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:12.756 06:46:34 -- scripts/common.sh@367 -- # return 0 00:06:12.756 06:46:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.756 06:46:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 06:46:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 06:46:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc 
genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 06:46:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 06:46:34 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.756 06:46:34 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1184170 00:06:12.756 06:46:34 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.756 06:46:34 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1184170 00:06:12.756 06:46:34 -- common/autotest_common.sh@829 -- # '[' -z 1184170 ']' 00:06:12.756 06:46:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.756 06:46:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.756 06:46:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.756 06:46:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.756 06:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.756 [2024-12-15 06:46:34.265119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:12.756 [2024-12-15 06:46:34.265179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184170 ] 00:06:12.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.756 [2024-12-15 06:46:34.347063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.756 [2024-12-15 06:46:34.384022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.756 [2024-12-15 06:46:34.384141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.697 06:46:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.697 06:46:35 -- common/autotest_common.sh@862 -- # return 0 00:06:13.697 06:46:35 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:13.697 06:46:35 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1184170 00:06:13.697 06:46:35 -- common/autotest_common.sh@936 -- # '[' -z 1184170 ']' 00:06:13.697 06:46:35 -- common/autotest_common.sh@940 -- # kill -0 1184170 00:06:13.697 06:46:35 -- common/autotest_common.sh@941 -- # uname 00:06:13.697 06:46:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.697 06:46:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1184170 00:06:13.957 06:46:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.957 06:46:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.957 06:46:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1184170' 00:06:13.957 killing process with pid 1184170 00:06:13.957 06:46:35 -- common/autotest_common.sh@955 -- # kill 1184170 00:06:13.957 06:46:35 -- 
common/autotest_common.sh@960 -- # wait 1184170 00:06:14.216 00:06:14.216 real 0m1.615s 00:06:14.216 user 0m1.711s 00:06:14.216 sys 0m0.497s 00:06:14.216 06:46:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.216 06:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.216 ************************************ 00:06:14.216 END TEST alias_rpc 00:06:14.216 ************************************ 00:06:14.216 06:46:35 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:14.216 06:46:35 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.216 06:46:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.216 06:46:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.216 06:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.216 ************************************ 00:06:14.216 START TEST spdkcli_tcp 00:06:14.216 ************************************ 00:06:14.216 06:46:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.216 * Looking for test storage... 00:06:14.216 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:14.216 06:46:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:14.216 06:46:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:14.216 06:46:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:14.476 06:46:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:14.476 06:46:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:14.476 06:46:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:14.476 06:46:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:14.476 06:46:35 -- scripts/common.sh@335 -- # IFS=.-: 00:06:14.476 06:46:35 -- scripts/common.sh@335 -- # read -ra ver1 00:06:14.476 06:46:35 -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.476 06:46:35 -- scripts/common.sh@336 -- # read -ra ver2 00:06:14.476 06:46:35 -- scripts/common.sh@337 -- # local 'op=<' 00:06:14.476 06:46:35 -- scripts/common.sh@339 -- # ver1_l=2 00:06:14.476 06:46:35 -- scripts/common.sh@340 -- # ver2_l=1 00:06:14.476 06:46:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:14.476 06:46:35 -- scripts/common.sh@343 -- # case "$op" in 00:06:14.476 06:46:35 -- scripts/common.sh@344 -- # : 1 00:06:14.476 06:46:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:14.476 06:46:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.476 06:46:35 -- scripts/common.sh@364 -- # decimal 1 00:06:14.476 06:46:35 -- scripts/common.sh@352 -- # local d=1 00:06:14.476 06:46:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.476 06:46:35 -- scripts/common.sh@354 -- # echo 1 00:06:14.476 06:46:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:14.476 06:46:35 -- scripts/common.sh@365 -- # decimal 2 00:06:14.476 06:46:35 -- scripts/common.sh@352 -- # local d=2 00:06:14.476 06:46:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.476 06:46:35 -- scripts/common.sh@354 -- # echo 2 00:06:14.476 06:46:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:14.476 06:46:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:14.476 06:46:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:14.476 06:46:35 -- scripts/common.sh@367 -- # return 0 00:06:14.476 06:46:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.476 06:46:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:14.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.476 --rc genhtml_branch_coverage=1 00:06:14.476 --rc genhtml_function_coverage=1 00:06:14.476 --rc genhtml_legend=1 00:06:14.476 --rc geninfo_all_blocks=1 00:06:14.476 --rc geninfo_unexecuted_blocks=1 00:06:14.476 00:06:14.476 ' 00:06:14.476 06:46:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:14.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.476 --rc genhtml_branch_coverage=1 00:06:14.476 --rc genhtml_function_coverage=1 00:06:14.476 --rc genhtml_legend=1 00:06:14.476 --rc geninfo_all_blocks=1 00:06:14.476 --rc geninfo_unexecuted_blocks=1 00:06:14.476 00:06:14.476 ' 00:06:14.476 06:46:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:14.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.476 --rc genhtml_branch_coverage=1 00:06:14.476 --rc genhtml_function_coverage=1 00:06:14.476 --rc genhtml_legend=1 00:06:14.476 --rc geninfo_all_blocks=1 00:06:14.476 --rc geninfo_unexecuted_blocks=1 00:06:14.476 00:06:14.476 ' 00:06:14.476 06:46:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:14.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.476 --rc genhtml_branch_coverage=1 00:06:14.476 --rc genhtml_function_coverage=1 00:06:14.476 --rc genhtml_legend=1 00:06:14.476 --rc geninfo_all_blocks=1 00:06:14.476 --rc geninfo_unexecuted_blocks=1 00:06:14.476 00:06:14.476 ' 00:06:14.476 06:46:35 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:14.476 06:46:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:14.476 06:46:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:14.476 06:46:35 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.476 06:46:35 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.476 06:46:35 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.476 06:46:35 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.476 06:46:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.476 06:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.476 06:46:35 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1184500 00:06:14.476 06:46:35 -- spdkcli/tcp.sh@27 -- # waitforlisten 1184500 00:06:14.476 06:46:35 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.476 06:46:35 -- common/autotest_common.sh@829 -- # '[' -z 1184500 ']' 00:06:14.476 06:46:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.476 06:46:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.476 06:46:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.476 06:46:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.476 06:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:14.476 [2024-12-15 06:46:35.942584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:14.477 [2024-12-15 06:46:35.942642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184500 ] 00:06:14.477 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.477 [2024-12-15 06:46:36.024527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.477 [2024-12-15 06:46:36.062206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.477 [2024-12-15 06:46:36.062364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.477 [2024-12-15 06:46:36.062365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.415 06:46:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.415 06:46:36 -- common/autotest_common.sh@862 -- # return 0 00:06:15.415 06:46:36 -- spdkcli/tcp.sh@31 -- # socat_pid=1184635 00:06:15.415 06:46:36 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.415 06:46:36 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.415 [ 00:06:15.415 "bdev_malloc_delete", 00:06:15.415 "bdev_malloc_create", 00:06:15.415 "bdev_null_resize", 00:06:15.415 "bdev_null_delete", 00:06:15.415 "bdev_null_create", 00:06:15.415 "bdev_nvme_cuse_unregister", 00:06:15.415 "bdev_nvme_cuse_register", 00:06:15.415 "bdev_opal_new_user", 00:06:15.415 "bdev_opal_set_lock_state", 00:06:15.415 "bdev_opal_delete", 00:06:15.415 "bdev_opal_get_info", 00:06:15.415 "bdev_opal_create", 00:06:15.415 "bdev_nvme_opal_revert", 00:06:15.415 "bdev_nvme_opal_init", 00:06:15.415 "bdev_nvme_send_cmd", 00:06:15.415 "bdev_nvme_get_path_iostat", 00:06:15.415 "bdev_nvme_get_mdns_discovery_info", 00:06:15.415 "bdev_nvme_stop_mdns_discovery", 00:06:15.415 "bdev_nvme_start_mdns_discovery", 00:06:15.415 "bdev_nvme_set_multipath_policy", 00:06:15.415 "bdev_nvme_set_preferred_path", 00:06:15.415 "bdev_nvme_get_io_paths", 00:06:15.415 "bdev_nvme_remove_error_injection", 00:06:15.415 "bdev_nvme_add_error_injection", 00:06:15.415 "bdev_nvme_get_discovery_info", 00:06:15.415 "bdev_nvme_stop_discovery", 00:06:15.415 "bdev_nvme_start_discovery", 00:06:15.415 "bdev_nvme_get_controller_health_info", 00:06:15.415 "bdev_nvme_disable_controller", 00:06:15.415 "bdev_nvme_enable_controller", 00:06:15.415 "bdev_nvme_reset_controller", 00:06:15.415 "bdev_nvme_get_transport_statistics", 00:06:15.415 "bdev_nvme_apply_firmware", 00:06:15.415 "bdev_nvme_detach_controller", 
00:06:15.415 "bdev_nvme_get_controllers", 00:06:15.415 "bdev_nvme_attach_controller", 00:06:15.415 "bdev_nvme_set_hotplug", 00:06:15.415 "bdev_nvme_set_options", 00:06:15.415 "bdev_passthru_delete", 00:06:15.415 "bdev_passthru_create", 00:06:15.415 "bdev_lvol_grow_lvstore", 00:06:15.415 "bdev_lvol_get_lvols", 00:06:15.415 "bdev_lvol_get_lvstores", 00:06:15.415 "bdev_lvol_delete", 00:06:15.415 "bdev_lvol_set_read_only", 00:06:15.415 "bdev_lvol_resize", 00:06:15.415 "bdev_lvol_decouple_parent", 00:06:15.415 "bdev_lvol_inflate", 00:06:15.415 "bdev_lvol_rename", 00:06:15.415 "bdev_lvol_clone_bdev", 00:06:15.415 "bdev_lvol_clone", 00:06:15.415 "bdev_lvol_snapshot", 00:06:15.415 "bdev_lvol_create", 00:06:15.415 "bdev_lvol_delete_lvstore", 00:06:15.415 "bdev_lvol_rename_lvstore", 00:06:15.415 "bdev_lvol_create_lvstore", 00:06:15.415 "bdev_raid_set_options", 00:06:15.415 "bdev_raid_remove_base_bdev", 00:06:15.415 "bdev_raid_add_base_bdev", 00:06:15.415 "bdev_raid_delete", 00:06:15.415 "bdev_raid_create", 00:06:15.415 "bdev_raid_get_bdevs", 00:06:15.415 "bdev_error_inject_error", 00:06:15.415 "bdev_error_delete", 00:06:15.415 "bdev_error_create", 00:06:15.415 "bdev_split_delete", 00:06:15.415 "bdev_split_create", 00:06:15.415 "bdev_delay_delete", 00:06:15.415 "bdev_delay_create", 00:06:15.415 "bdev_delay_update_latency", 00:06:15.415 "bdev_zone_block_delete", 00:06:15.415 "bdev_zone_block_create", 00:06:15.415 "blobfs_create", 00:06:15.415 "blobfs_detect", 00:06:15.415 "blobfs_set_cache_size", 00:06:15.415 "bdev_aio_delete", 00:06:15.415 "bdev_aio_rescan", 00:06:15.415 "bdev_aio_create", 00:06:15.415 "bdev_ftl_set_property", 00:06:15.415 "bdev_ftl_get_properties", 00:06:15.415 "bdev_ftl_get_stats", 00:06:15.415 "bdev_ftl_unmap", 00:06:15.415 "bdev_ftl_unload", 00:06:15.415 "bdev_ftl_delete", 00:06:15.415 "bdev_ftl_load", 00:06:15.415 "bdev_ftl_create", 00:06:15.415 "bdev_virtio_attach_controller", 00:06:15.415 "bdev_virtio_scsi_get_devices", 00:06:15.415 "bdev_virtio_detach_controller", 00:06:15.415 "bdev_virtio_blk_set_hotplug", 00:06:15.415 "bdev_iscsi_delete", 00:06:15.415 "bdev_iscsi_create", 00:06:15.415 "bdev_iscsi_set_options", 00:06:15.415 "accel_error_inject_error", 00:06:15.415 "ioat_scan_accel_module", 00:06:15.415 "dsa_scan_accel_module", 00:06:15.415 "iaa_scan_accel_module", 00:06:15.415 "iscsi_set_options", 00:06:15.415 "iscsi_get_auth_groups", 00:06:15.415 "iscsi_auth_group_remove_secret", 00:06:15.415 "iscsi_auth_group_add_secret", 00:06:15.415 "iscsi_delete_auth_group", 00:06:15.415 "iscsi_create_auth_group", 00:06:15.415 "iscsi_set_discovery_auth", 00:06:15.415 "iscsi_get_options", 00:06:15.415 "iscsi_target_node_request_logout", 00:06:15.415 "iscsi_target_node_set_redirect", 00:06:15.415 "iscsi_target_node_set_auth", 00:06:15.415 "iscsi_target_node_add_lun", 00:06:15.415 "iscsi_get_connections", 00:06:15.415 "iscsi_portal_group_set_auth", 00:06:15.415 "iscsi_start_portal_group", 00:06:15.415 "iscsi_delete_portal_group", 00:06:15.415 "iscsi_create_portal_group", 00:06:15.415 "iscsi_get_portal_groups", 00:06:15.415 "iscsi_delete_target_node", 00:06:15.415 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.415 "iscsi_target_node_add_pg_ig_maps", 00:06:15.415 "iscsi_create_target_node", 00:06:15.415 "iscsi_get_target_nodes", 00:06:15.415 "iscsi_delete_initiator_group", 00:06:15.415 "iscsi_initiator_group_remove_initiators", 00:06:15.415 "iscsi_initiator_group_add_initiators", 00:06:15.415 "iscsi_create_initiator_group", 00:06:15.415 "iscsi_get_initiator_groups", 00:06:15.415 
"nvmf_set_crdt", 00:06:15.415 "nvmf_set_config", 00:06:15.415 "nvmf_set_max_subsystems", 00:06:15.415 "nvmf_subsystem_get_listeners", 00:06:15.415 "nvmf_subsystem_get_qpairs", 00:06:15.415 "nvmf_subsystem_get_controllers", 00:06:15.415 "nvmf_get_stats", 00:06:15.415 "nvmf_get_transports", 00:06:15.415 "nvmf_create_transport", 00:06:15.415 "nvmf_get_targets", 00:06:15.415 "nvmf_delete_target", 00:06:15.415 "nvmf_create_target", 00:06:15.415 "nvmf_subsystem_allow_any_host", 00:06:15.415 "nvmf_subsystem_remove_host", 00:06:15.415 "nvmf_subsystem_add_host", 00:06:15.415 "nvmf_subsystem_remove_ns", 00:06:15.415 "nvmf_subsystem_add_ns", 00:06:15.415 "nvmf_subsystem_listener_set_ana_state", 00:06:15.415 "nvmf_discovery_get_referrals", 00:06:15.415 "nvmf_discovery_remove_referral", 00:06:15.415 "nvmf_discovery_add_referral", 00:06:15.415 "nvmf_subsystem_remove_listener", 00:06:15.415 "nvmf_subsystem_add_listener", 00:06:15.415 "nvmf_delete_subsystem", 00:06:15.415 "nvmf_create_subsystem", 00:06:15.415 "nvmf_get_subsystems", 00:06:15.415 "env_dpdk_get_mem_stats", 00:06:15.415 "nbd_get_disks", 00:06:15.415 "nbd_stop_disk", 00:06:15.415 "nbd_start_disk", 00:06:15.415 "ublk_recover_disk", 00:06:15.416 "ublk_get_disks", 00:06:15.416 "ublk_stop_disk", 00:06:15.416 "ublk_start_disk", 00:06:15.416 "ublk_destroy_target", 00:06:15.416 "ublk_create_target", 00:06:15.416 "virtio_blk_create_transport", 00:06:15.416 "virtio_blk_get_transports", 00:06:15.416 "vhost_controller_set_coalescing", 00:06:15.416 "vhost_get_controllers", 00:06:15.416 "vhost_delete_controller", 00:06:15.416 "vhost_create_blk_controller", 00:06:15.416 "vhost_scsi_controller_remove_target", 00:06:15.416 "vhost_scsi_controller_add_target", 00:06:15.416 "vhost_start_scsi_controller", 00:06:15.416 "vhost_create_scsi_controller", 00:06:15.416 "thread_set_cpumask", 00:06:15.416 "framework_get_scheduler", 00:06:15.416 "framework_set_scheduler", 00:06:15.416 "framework_get_reactors", 00:06:15.416 "thread_get_io_channels", 00:06:15.416 "thread_get_pollers", 00:06:15.416 "thread_get_stats", 00:06:15.416 "framework_monitor_context_switch", 00:06:15.416 "spdk_kill_instance", 00:06:15.416 "log_enable_timestamps", 00:06:15.416 "log_get_flags", 00:06:15.416 "log_clear_flag", 00:06:15.416 "log_set_flag", 00:06:15.416 "log_get_level", 00:06:15.416 "log_set_level", 00:06:15.416 "log_get_print_level", 00:06:15.416 "log_set_print_level", 00:06:15.416 "framework_enable_cpumask_locks", 00:06:15.416 "framework_disable_cpumask_locks", 00:06:15.416 "framework_wait_init", 00:06:15.416 "framework_start_init", 00:06:15.416 "scsi_get_devices", 00:06:15.416 "bdev_get_histogram", 00:06:15.416 "bdev_enable_histogram", 00:06:15.416 "bdev_set_qos_limit", 00:06:15.416 "bdev_set_qd_sampling_period", 00:06:15.416 "bdev_get_bdevs", 00:06:15.416 "bdev_reset_iostat", 00:06:15.416 "bdev_get_iostat", 00:06:15.416 "bdev_examine", 00:06:15.416 "bdev_wait_for_examine", 00:06:15.416 "bdev_set_options", 00:06:15.416 "notify_get_notifications", 00:06:15.416 "notify_get_types", 00:06:15.416 "accel_get_stats", 00:06:15.416 "accel_set_options", 00:06:15.416 "accel_set_driver", 00:06:15.416 "accel_crypto_key_destroy", 00:06:15.416 "accel_crypto_keys_get", 00:06:15.416 "accel_crypto_key_create", 00:06:15.416 "accel_assign_opc", 00:06:15.416 "accel_get_module_info", 00:06:15.416 "accel_get_opc_assignments", 00:06:15.416 "vmd_rescan", 00:06:15.416 "vmd_remove_device", 00:06:15.416 "vmd_enable", 00:06:15.416 "sock_set_default_impl", 00:06:15.416 "sock_impl_set_options", 00:06:15.416 
"sock_impl_get_options", 00:06:15.416 "iobuf_get_stats", 00:06:15.416 "iobuf_set_options", 00:06:15.416 "framework_get_pci_devices", 00:06:15.416 "framework_get_config", 00:06:15.416 "framework_get_subsystems", 00:06:15.416 "trace_get_info", 00:06:15.416 "trace_get_tpoint_group_mask", 00:06:15.416 "trace_disable_tpoint_group", 00:06:15.416 "trace_enable_tpoint_group", 00:06:15.416 "trace_clear_tpoint_mask", 00:06:15.416 "trace_set_tpoint_mask", 00:06:15.416 "spdk_get_version", 00:06:15.416 "rpc_get_methods" 00:06:15.416 ] 00:06:15.416 06:46:36 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.416 06:46:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.416 06:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:15.416 06:46:36 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.416 06:46:36 -- spdkcli/tcp.sh@38 -- # killprocess 1184500 00:06:15.416 06:46:36 -- common/autotest_common.sh@936 -- # '[' -z 1184500 ']' 00:06:15.416 06:46:36 -- common/autotest_common.sh@940 -- # kill -0 1184500 00:06:15.416 06:46:36 -- common/autotest_common.sh@941 -- # uname 00:06:15.416 06:46:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.416 06:46:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1184500 00:06:15.675 06:46:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.675 06:46:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.675 06:46:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1184500' 00:06:15.675 killing process with pid 1184500 00:06:15.675 06:46:37 -- common/autotest_common.sh@955 -- # kill 1184500 00:06:15.675 06:46:37 -- common/autotest_common.sh@960 -- # wait 1184500 00:06:15.934 00:06:15.934 real 0m1.661s 00:06:15.934 user 0m3.005s 00:06:15.934 sys 0m0.539s 00:06:15.934 06:46:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.935 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.935 ************************************ 00:06:15.935 END TEST spdkcli_tcp 00:06:15.935 ************************************ 00:06:15.935 06:46:37 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.935 06:46:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.935 06:46:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.935 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.935 ************************************ 00:06:15.935 START TEST dpdk_mem_utility 00:06:15.935 ************************************ 00:06:15.935 06:46:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.935 * Looking for test storage... 
00:06:15.935 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:15.935 06:46:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.935 06:46:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.935 06:46:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:16.194 06:46:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:16.194 06:46:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:16.194 06:46:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:16.194 06:46:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:16.195 06:46:37 -- scripts/common.sh@335 -- # IFS=.-: 00:06:16.195 06:46:37 -- scripts/common.sh@335 -- # read -ra ver1 00:06:16.195 06:46:37 -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.195 06:46:37 -- scripts/common.sh@336 -- # read -ra ver2 00:06:16.195 06:46:37 -- scripts/common.sh@337 -- # local 'op=<' 00:06:16.195 06:46:37 -- scripts/common.sh@339 -- # ver1_l=2 00:06:16.195 06:46:37 -- scripts/common.sh@340 -- # ver2_l=1 00:06:16.195 06:46:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:16.195 06:46:37 -- scripts/common.sh@343 -- # case "$op" in 00:06:16.195 06:46:37 -- scripts/common.sh@344 -- # : 1 00:06:16.195 06:46:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:16.195 06:46:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.195 06:46:37 -- scripts/common.sh@364 -- # decimal 1 00:06:16.195 06:46:37 -- scripts/common.sh@352 -- # local d=1 00:06:16.195 06:46:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.195 06:46:37 -- scripts/common.sh@354 -- # echo 1 00:06:16.195 06:46:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:16.195 06:46:37 -- scripts/common.sh@365 -- # decimal 2 00:06:16.195 06:46:37 -- scripts/common.sh@352 -- # local d=2 00:06:16.195 06:46:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.195 06:46:37 -- scripts/common.sh@354 -- # echo 2 00:06:16.195 06:46:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:16.195 06:46:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:16.195 06:46:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:16.195 06:46:37 -- scripts/common.sh@367 -- # return 0 00:06:16.195 06:46:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.195 06:46:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:16.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.195 --rc genhtml_branch_coverage=1 00:06:16.195 --rc genhtml_function_coverage=1 00:06:16.195 --rc genhtml_legend=1 00:06:16.195 --rc geninfo_all_blocks=1 00:06:16.195 --rc geninfo_unexecuted_blocks=1 00:06:16.195 00:06:16.195 ' 00:06:16.195 06:46:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:16.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.195 --rc genhtml_branch_coverage=1 00:06:16.195 --rc genhtml_function_coverage=1 00:06:16.195 --rc genhtml_legend=1 00:06:16.195 --rc geninfo_all_blocks=1 00:06:16.195 --rc geninfo_unexecuted_blocks=1 00:06:16.195 00:06:16.195 ' 00:06:16.195 06:46:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:16.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.195 --rc genhtml_branch_coverage=1 00:06:16.195 --rc genhtml_function_coverage=1 00:06:16.195 --rc genhtml_legend=1 00:06:16.195 --rc geninfo_all_blocks=1 00:06:16.195 --rc geninfo_unexecuted_blocks=1 00:06:16.195 
00:06:16.195 ' 00:06:16.195 06:46:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:16.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.195 --rc genhtml_branch_coverage=1 00:06:16.195 --rc genhtml_function_coverage=1 00:06:16.195 --rc genhtml_legend=1 00:06:16.195 --rc geninfo_all_blocks=1 00:06:16.195 --rc geninfo_unexecuted_blocks=1 00:06:16.195 00:06:16.195 ' 00:06:16.195 06:46:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.195 06:46:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1184849 00:06:16.195 06:46:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1184849 00:06:16.195 06:46:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.195 06:46:37 -- common/autotest_common.sh@829 -- # '[' -z 1184849 ']' 00:06:16.195 06:46:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.195 06:46:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.195 06:46:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.195 06:46:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.195 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:16.195 [2024-12-15 06:46:37.645324] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:16.195 [2024-12-15 06:46:37.645376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184849 ] 00:06:16.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.195 [2024-12-15 06:46:37.729651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.195 [2024-12-15 06:46:37.765441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.195 [2024-12-15 06:46:37.765562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.134 06:46:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.134 06:46:38 -- common/autotest_common.sh@862 -- # return 0 00:06:17.134 06:46:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:17.134 06:46:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:17.134 06:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.134 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:17.134 { 00:06:17.134 "filename": "/tmp/spdk_mem_dump.txt" 00:06:17.134 } 00:06:17.134 06:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.134 06:46:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:17.134 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:17.134 1 heaps totaling size 814.000000 MiB 00:06:17.134 size: 814.000000 MiB heap id: 0 00:06:17.134 end heaps---------- 00:06:17.134 8 mempools totaling size 598.116089 MiB 00:06:17.134 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:17.134 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:17.134 size: 84.521057 MiB name: 
bdev_io_1184849 00:06:17.134 size: 51.011292 MiB name: evtpool_1184849 00:06:17.134 size: 50.003479 MiB name: msgpool_1184849 00:06:17.134 size: 21.763794 MiB name: PDU_Pool 00:06:17.134 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:17.134 size: 0.026123 MiB name: Session_Pool 00:06:17.134 end mempools------- 00:06:17.134 6 memzones totaling size 4.142822 MiB 00:06:17.134 size: 1.000366 MiB name: RG_ring_0_1184849 00:06:17.134 size: 1.000366 MiB name: RG_ring_1_1184849 00:06:17.134 size: 1.000366 MiB name: RG_ring_4_1184849 00:06:17.134 size: 1.000366 MiB name: RG_ring_5_1184849 00:06:17.134 size: 0.125366 MiB name: RG_ring_2_1184849 00:06:17.134 size: 0.015991 MiB name: RG_ring_3_1184849 00:06:17.134 end memzones------- 00:06:17.134 06:46:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:17.134 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:17.134 list of free elements. size: 12.519348 MiB 00:06:17.134 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:17.134 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:17.134 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:17.134 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:17.134 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:17.134 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:17.134 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:17.134 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:17.134 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:17.134 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:17.134 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:17.134 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:17.134 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:17.134 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:17.134 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:17.134 list of standard malloc elements. 
size: 199.218079 MiB 00:06:17.134 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:17.134 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:17.134 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:17.134 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:17.134 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:17.134 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:17.134 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:17.134 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:17.134 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:17.134 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:17.134 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:17.134 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:17.134 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:17.134 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:17.134 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:17.134 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:17.134 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:17.134 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:17.134 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:17.134 list of memzone associated elements. 
size: 602.262573 MiB 00:06:17.134 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:17.134 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:17.134 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:17.134 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:17.134 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:17.134 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1184849_0 00:06:17.134 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:17.134 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1184849_0 00:06:17.134 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:17.134 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1184849_0 00:06:17.134 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:17.134 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:17.134 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:17.134 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:17.134 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:17.134 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1184849 00:06:17.134 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:17.134 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1184849 00:06:17.134 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:17.134 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1184849 00:06:17.134 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:17.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:17.134 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:17.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:17.134 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:17.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:17.134 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:17.135 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:17.135 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:17.135 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1184849 00:06:17.135 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:17.135 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1184849 00:06:17.135 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:17.135 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1184849 00:06:17.135 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:17.135 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1184849 00:06:17.135 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:17.135 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1184849 00:06:17.135 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:17.135 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:17.135 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:17.135 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:17.135 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:17.135 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:17.135 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:17.135 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1184849 00:06:17.135 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:17.135 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:17.135 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:17.135 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:17.135 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:17.135 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1184849 00:06:17.135 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:17.135 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:17.135 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:17.135 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1184849 00:06:17.135 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:17.135 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1184849 00:06:17.135 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:17.135 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:17.135 06:46:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:17.135 06:46:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1184849 00:06:17.135 06:46:38 -- common/autotest_common.sh@936 -- # '[' -z 1184849 ']' 00:06:17.135 06:46:38 -- common/autotest_common.sh@940 -- # kill -0 1184849 00:06:17.135 06:46:38 -- common/autotest_common.sh@941 -- # uname 00:06:17.135 06:46:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.135 06:46:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1184849 00:06:17.135 06:46:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:17.135 06:46:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:17.135 06:46:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1184849' 00:06:17.135 killing process with pid 1184849 00:06:17.135 06:46:38 -- common/autotest_common.sh@955 -- # kill 1184849 00:06:17.135 06:46:38 -- common/autotest_common.sh@960 -- # wait 1184849 00:06:17.394 00:06:17.394 real 0m1.528s 00:06:17.395 user 0m1.587s 00:06:17.395 sys 0m0.462s 00:06:17.395 06:46:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.395 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 END TEST dpdk_mem_utility 00:06:17.395 ************************************ 00:06:17.395 06:46:38 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:17.395 06:46:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.395 06:46:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.395 06:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 START TEST event 00:06:17.395 ************************************ 00:06:17.395 06:46:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:17.654 * Looking for test storage... 
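The heap, mempool, and memzone report a few lines up is produced by scripts/dpdk_mem_info.py, which parses the dump file the target writes when env_dpdk_get_mem_stats is called (/tmp/spdk_mem_dump.txt per the RPC reply above); the -m 0 pass then prints per-element detail for heap 0. A condensed sketch of the same flow against a running target:

  # Ask the target to dump its DPDK memory state, then summarize the dump.
  ./scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                # heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0           # free/malloc element detail, heap 0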
00:06:17.654 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:17.654 06:46:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.654 06:46:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.654 06:46:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.654 06:46:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.654 06:46:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.654 06:46:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.654 06:46:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.654 06:46:39 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.654 06:46:39 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.654 06:46:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.654 06:46:39 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.654 06:46:39 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.654 06:46:39 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.654 06:46:39 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.654 06:46:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.654 06:46:39 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.654 06:46:39 -- scripts/common.sh@344 -- # : 1 00:06:17.654 06:46:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.654 06:46:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.654 06:46:39 -- scripts/common.sh@364 -- # decimal 1 00:06:17.654 06:46:39 -- scripts/common.sh@352 -- # local d=1 00:06:17.654 06:46:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.654 06:46:39 -- scripts/common.sh@354 -- # echo 1 00:06:17.654 06:46:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:17.654 06:46:39 -- scripts/common.sh@365 -- # decimal 2 00:06:17.654 06:46:39 -- scripts/common.sh@352 -- # local d=2 00:06:17.654 06:46:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.654 06:46:39 -- scripts/common.sh@354 -- # echo 2 00:06:17.654 06:46:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:17.654 06:46:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:17.654 06:46:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:17.654 06:46:39 -- scripts/common.sh@367 -- # return 0 00:06:17.654 06:46:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.654 06:46:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:17.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.654 --rc genhtml_branch_coverage=1 00:06:17.654 --rc genhtml_function_coverage=1 00:06:17.654 --rc genhtml_legend=1 00:06:17.654 --rc geninfo_all_blocks=1 00:06:17.654 --rc geninfo_unexecuted_blocks=1 00:06:17.654 00:06:17.654 ' 00:06:17.654 06:46:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:17.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.654 --rc genhtml_branch_coverage=1 00:06:17.654 --rc genhtml_function_coverage=1 00:06:17.654 --rc genhtml_legend=1 00:06:17.654 --rc geninfo_all_blocks=1 00:06:17.654 --rc geninfo_unexecuted_blocks=1 00:06:17.654 00:06:17.654 ' 00:06:17.654 06:46:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:17.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.654 --rc genhtml_branch_coverage=1 00:06:17.654 --rc genhtml_function_coverage=1 00:06:17.654 --rc genhtml_legend=1 00:06:17.654 --rc geninfo_all_blocks=1 00:06:17.654 --rc geninfo_unexecuted_blocks=1 00:06:17.654 00:06:17.654 ' 
00:06:17.654 06:46:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:17.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.655 --rc genhtml_branch_coverage=1 00:06:17.655 --rc genhtml_function_coverage=1 00:06:17.655 --rc genhtml_legend=1 00:06:17.655 --rc geninfo_all_blocks=1 00:06:17.655 --rc geninfo_unexecuted_blocks=1 00:06:17.655 00:06:17.655 ' 00:06:17.655 06:46:39 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:17.655 06:46:39 -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.655 06:46:39 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.655 06:46:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:17.655 06:46:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.655 06:46:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.655 ************************************ 00:06:17.655 START TEST event_perf 00:06:17.655 ************************************ 00:06:17.655 06:46:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.655 Running I/O for 1 seconds...[2024-12-15 06:46:39.204523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.655 [2024-12-15 06:46:39.204612] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185189 ] 00:06:17.655 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.655 [2024-12-15 06:46:39.291807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.914 [2024-12-15 06:46:39.330378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.914 [2024-12-15 06:46:39.330489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.914 [2024-12-15 06:46:39.330596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.914 [2024-12-15 06:46:39.330597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.947 Running I/O for 1 seconds... 00:06:18.947 lcore 0: 209893 00:06:18.947 lcore 1: 209892 00:06:18.947 lcore 2: 209891 00:06:18.947 lcore 3: 209893 00:06:18.947 done. 
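The per-lcore counters above are event_perf output: one reactor runs per bit in the core mask, and each reports how many events it processed in the requested interval. A minimal sketch of the same run outside the harness, assuming a built SPDK tree laid out like this job's workspace:

  # Benchmark the event framework on cores 0-3 (mask 0xF) for 1 second.
  ./test/event/event_perf/event_perf -m 0xF -t 1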
00:06:18.947 00:06:18.947 real 0m1.210s 00:06:18.947 user 0m4.105s 00:06:18.947 sys 0m0.100s 00:06:18.947 06:46:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.947 06:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:18.948 ************************************ 00:06:18.948 END TEST event_perf 00:06:18.948 ************************************ 00:06:18.948 06:46:40 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.948 06:46:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:18.948 06:46:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.948 06:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:18.948 ************************************ 00:06:18.948 START TEST event_reactor 00:06:18.948 ************************************ 00:06:18.948 06:46:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.948 [2024-12-15 06:46:40.466489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:18.948 [2024-12-15 06:46:40.466580] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185477 ] 00:06:18.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.948 [2024-12-15 06:46:40.557445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.206 [2024-12-15 06:46:40.593270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.144 test_start 00:06:20.144 oneshot 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 250 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 250 00:06:20.144 tick 500 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 250 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 test_end 00:06:20.144 00:06:20.144 real 0m1.204s 00:06:20.144 user 0m1.104s 00:06:20.144 sys 0m0.095s 00:06:20.144 06:46:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.144 06:46:41 -- common/autotest_common.sh@10 -- # set +x 00:06:20.144 ************************************ 00:06:20.144 END TEST event_reactor 00:06:20.144 ************************************ 00:06:20.144 06:46:41 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.144 06:46:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:20.144 06:46:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.144 06:46:41 -- common/autotest_common.sh@10 -- # set +x 00:06:20.144 ************************************ 00:06:20.144 START TEST event_reactor_perf 00:06:20.144 ************************************ 00:06:20.144 06:46:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.144 [2024-12-15 06:46:41.721304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:20.144 [2024-12-15 06:46:41.721395] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185759 ] 00:06:20.144 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.404 [2024-12-15 06:46:41.809382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.404 [2024-12-15 06:46:41.844110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.342 test_start 00:06:21.342 test_end 00:06:21.342 Performance: 523201 events per second 00:06:21.342 00:06:21.342 real 0m1.201s 00:06:21.342 user 0m1.104s 00:06:21.342 sys 0m0.093s 00:06:21.342 06:46:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.342 06:46:42 -- common/autotest_common.sh@10 -- # set +x 00:06:21.342 ************************************ 00:06:21.342 END TEST event_reactor_perf 00:06:21.342 ************************************ 00:06:21.342 06:46:42 -- event/event.sh@49 -- # uname -s 00:06:21.342 06:46:42 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.342 06:46:42 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.342 06:46:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.342 06:46:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.342 06:46:42 -- common/autotest_common.sh@10 -- # set +x 00:06:21.342 ************************************ 00:06:21.342 START TEST event_scheduler 00:06:21.342 ************************************ 00:06:21.342 06:46:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.601 * Looking for test storage... 00:06:21.601 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:21.601 06:46:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:21.602 06:46:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:21.602 06:46:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:21.602 06:46:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:21.602 06:46:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:21.602 06:46:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:21.602 06:46:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:21.602 06:46:43 -- scripts/common.sh@335 -- # IFS=.-: 00:06:21.602 06:46:43 -- scripts/common.sh@335 -- # read -ra ver1 00:06:21.602 06:46:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.602 06:46:43 -- scripts/common.sh@336 -- # read -ra ver2 00:06:21.602 06:46:43 -- scripts/common.sh@337 -- # local 'op=<' 00:06:21.602 06:46:43 -- scripts/common.sh@339 -- # ver1_l=2 00:06:21.602 06:46:43 -- scripts/common.sh@340 -- # ver2_l=1 00:06:21.602 06:46:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:21.602 06:46:43 -- scripts/common.sh@343 -- # case "$op" in 00:06:21.602 06:46:43 -- scripts/common.sh@344 -- # : 1 00:06:21.602 06:46:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:21.602 06:46:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.602 06:46:43 -- scripts/common.sh@364 -- # decimal 1 00:06:21.602 06:46:43 -- scripts/common.sh@352 -- # local d=1 00:06:21.602 06:46:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.602 06:46:43 -- scripts/common.sh@354 -- # echo 1 00:06:21.602 06:46:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:21.602 06:46:43 -- scripts/common.sh@365 -- # decimal 2 00:06:21.602 06:46:43 -- scripts/common.sh@352 -- # local d=2 00:06:21.602 06:46:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.602 06:46:43 -- scripts/common.sh@354 -- # echo 2 00:06:21.602 06:46:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:21.602 06:46:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:21.602 06:46:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:21.602 06:46:43 -- scripts/common.sh@367 -- # return 0 00:06:21.602 06:46:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.602 06:46:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:21.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.602 --rc genhtml_branch_coverage=1 00:06:21.602 --rc genhtml_function_coverage=1 00:06:21.602 --rc genhtml_legend=1 00:06:21.602 --rc geninfo_all_blocks=1 00:06:21.602 --rc geninfo_unexecuted_blocks=1 00:06:21.602 00:06:21.602 ' 00:06:21.602 06:46:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:21.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.602 --rc genhtml_branch_coverage=1 00:06:21.602 --rc genhtml_function_coverage=1 00:06:21.602 --rc genhtml_legend=1 00:06:21.602 --rc geninfo_all_blocks=1 00:06:21.602 --rc geninfo_unexecuted_blocks=1 00:06:21.602 00:06:21.602 ' 00:06:21.602 06:46:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:21.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.602 --rc genhtml_branch_coverage=1 00:06:21.602 --rc genhtml_function_coverage=1 00:06:21.602 --rc genhtml_legend=1 00:06:21.602 --rc geninfo_all_blocks=1 00:06:21.602 --rc geninfo_unexecuted_blocks=1 00:06:21.602 00:06:21.602 ' 00:06:21.602 06:46:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:21.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.602 --rc genhtml_branch_coverage=1 00:06:21.602 --rc genhtml_function_coverage=1 00:06:21.602 --rc genhtml_legend=1 00:06:21.602 --rc geninfo_all_blocks=1 00:06:21.602 --rc geninfo_unexecuted_blocks=1 00:06:21.602 00:06:21.602 ' 00:06:21.602 06:46:43 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.602 06:46:43 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1186027 00:06:21.602 06:46:43 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.602 06:46:43 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.602 06:46:43 -- scheduler/scheduler.sh@37 -- # waitforlisten 1186027 00:06:21.602 06:46:43 -- common/autotest_common.sh@829 -- # '[' -z 1186027 ']' 00:06:21.602 06:46:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.602 06:46:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.602 06:46:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:21.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.602 06:46:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.602 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.602 [2024-12-15 06:46:43.183017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.602 [2024-12-15 06:46:43.183076] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186027 ] 00:06:21.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.862 [2024-12-15 06:46:43.252463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.862 [2024-12-15 06:46:43.290880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.862 [2024-12-15 06:46:43.290988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.862 [2024-12-15 06:46:43.291089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.862 [2024-12-15 06:46:43.291090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.862 06:46:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.862 06:46:43 -- common/autotest_common.sh@862 -- # return 0 00:06:21.862 06:46:43 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.862 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.862 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.862 POWER: Env isn't set yet! 00:06:21.862 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:21.862 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.862 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.862 POWER: Attempting to initialise PSTAT power management... 00:06:21.862 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:21.862 POWER: Initialized successfully for lcore 0 power management 00:06:21.862 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:21.862 POWER: Initialized successfully for lcore 1 power management 00:06:21.862 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:21.862 POWER: Initialized successfully for lcore 2 power management 00:06:21.862 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:21.862 POWER: Initialized successfully for lcore 3 power management 00:06:21.862 [2024-12-15 06:46:43.406237] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.862 [2024-12-15 06:46:43.406254] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.862 [2024-12-15 06:46:43.406263] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.862 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.862 06:46:43 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.862 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.862 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.862 [2024-12-15 06:46:43.469701] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
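The POWER lines above are a side effect of switching the reactors to the dynamic scheduler: on init it moves each core's cpufreq governor to 'performance', and the 20/80/95 notices are its load, core, and busy thresholds. A minimal sketch of the same switch over RPC, assuming the default /var/tmp/spdk.sock socket:

  # Select the dynamic scheduler, then read the active scheduler back.
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_get_scheduler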
00:06:21.862 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.862 06:46:43 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.862 06:46:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.862 06:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.862 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.862 ************************************ 00:06:21.862 START TEST scheduler_create_thread 00:06:21.862 ************************************ 00:06:21.862 06:46:43 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:21.862 06:46:43 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.862 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.862 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.862 2 00:06:21.862 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.862 06:46:43 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.862 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.862 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 3 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 4 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 5 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 6 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 7 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 8 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 9 00:06:22.121 
06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 10 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:22.121 06:46:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:22.121 06:46:43 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:22.121 06:46:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.121 06:46:43 -- common/autotest_common.sh@10 -- # set +x 00:06:23.058 06:46:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.058 06:46:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.058 06:46:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.058 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:06:24.435 06:46:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.435 06:46:45 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.435 06:46:45 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.435 06:46:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.436 06:46:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.371 06:46:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.371 00:06:25.371 real 0m3.382s 00:06:25.371 user 0m0.022s 00:06:25.371 sys 0m0.008s 00:06:25.371 06:46:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.371 06:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:25.371 ************************************ 00:06:25.371 END TEST scheduler_create_thread 00:06:25.371 ************************************ 00:06:25.371 06:46:46 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:25.371 06:46:46 -- scheduler/scheduler.sh@46 -- # killprocess 1186027 00:06:25.371 06:46:46 -- common/autotest_common.sh@936 -- # '[' -z 1186027 ']' 00:06:25.371 06:46:46 -- common/autotest_common.sh@940 -- # kill -0 1186027 00:06:25.371 06:46:46 -- common/autotest_common.sh@941 -- # uname 00:06:25.371 06:46:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.371 06:46:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1186027 00:06:25.371 06:46:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:25.371 06:46:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:25.371 06:46:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1186027' 00:06:25.371 killing process with pid 1186027 00:06:25.371 06:46:46 -- common/autotest_common.sh@955 -- # kill 1186027 00:06:25.371 06:46:46 -- common/autotest_common.sh@960 -- # wait 1186027 00:06:25.630 [2024-12-15 06:46:47.241481] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
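The scheduler_create_thread block above exercises a test-only RPC plugin rather than core methods: scheduler_thread_create spawns an SPDK thread with a cpumask and a target active percentage, and scheduler_thread_set_active / scheduler_thread_delete adjust or retire it by thread id. A hedged sketch of one such call, assuming PYTHONPATH points at test/event/scheduler where the plugin module lives:

  # Create a thread pinned to core 0 that keeps itself ~100% active.
  PYTHONPATH=./test/event/scheduler ./scripts/rpc.py --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100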
00:06:25.889 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:25.889 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:25.889 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:25.889 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:25.889 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:25.889 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:25.889 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:25.889 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:25.889 00:06:25.889 real 0m4.504s 00:06:25.889 user 0m7.953s 00:06:25.889 sys 0m0.386s 00:06:25.889 06:46:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.889 06:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:25.889 ************************************ 00:06:25.889 END TEST event_scheduler 00:06:25.889 ************************************ 00:06:25.889 06:46:47 -- event/event.sh@51 -- # modprobe -n nbd 00:06:25.889 06:46:47 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:25.889 06:46:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.889 06:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.889 06:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:25.889 ************************************ 00:06:25.889 START TEST app_repeat 00:06:25.889 ************************************ 00:06:25.889 06:46:47 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:25.889 06:46:47 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.889 06:46:47 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.889 06:46:47 -- event/event.sh@13 -- # local nbd_list 00:06:25.889 06:46:47 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.889 06:46:47 -- event/event.sh@14 -- # local bdev_list 00:06:25.889 06:46:47 -- event/event.sh@15 -- # local repeat_times=4 00:06:25.889 06:46:47 -- event/event.sh@17 -- # modprobe nbd 00:06:25.889 06:46:47 -- event/event.sh@19 -- # repeat_pid=1186755 00:06:25.889 06:46:47 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.889 06:46:47 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:25.889 06:46:47 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1186755' 00:06:25.889 Process app_repeat pid: 1186755 00:06:26.149 06:46:47 -- event/event.sh@23 -- # for i in {0..2} 00:06:26.149 06:46:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.149 spdk_app_start Round 0 00:06:26.149 06:46:47 -- event/event.sh@25 -- # waitforlisten 1186755 /var/tmp/spdk-nbd.sock 00:06:26.149 06:46:47 -- common/autotest_common.sh@829 -- # '[' -z 1186755 ']' 00:06:26.149 06:46:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.149 06:46:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.149 06:46:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:26.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.149 06:46:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.149 06:46:47 -- common/autotest_common.sh@10 -- # set +x 00:06:26.149 [2024-12-15 06:46:47.554279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:26.149 [2024-12-15 06:46:47.554344] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186755 ] 00:06:26.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.149 [2024-12-15 06:46:47.625000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.149 [2024-12-15 06:46:47.663231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.149 [2024-12-15 06:46:47.663233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.086 06:46:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.086 06:46:48 -- common/autotest_common.sh@862 -- # return 0 00:06:27.086 06:46:48 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.086 Malloc0 00:06:27.086 06:46:48 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.347 Malloc1 00:06:27.347 06:46:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.347 06:46:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.347 06:46:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.347 06:46:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@12 -- # local i 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.348 /dev/nbd0 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.348 06:46:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.348 06:46:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:27.348 06:46:48 -- common/autotest_common.sh@867 -- # local i 00:06:27.348 06:46:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.348 06:46:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.348 06:46:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:27.348 06:46:48 -- common/autotest_common.sh@871 -- 
# break 00:06:27.348 06:46:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.348 06:46:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.348 06:46:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.348 1+0 records in 00:06:27.348 1+0 records out 00:06:27.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021348 s, 19.2 MB/s 00:06:27.348 06:46:48 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.607 06:46:48 -- common/autotest_common.sh@884 -- # size=4096 00:06:27.607 06:46:48 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.607 06:46:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.607 06:46:48 -- common/autotest_common.sh@887 -- # return 0 00:06:27.607 06:46:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.607 06:46:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.607 06:46:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.607 /dev/nbd1 00:06:27.607 06:46:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.607 06:46:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.607 06:46:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:27.607 06:46:49 -- common/autotest_common.sh@867 -- # local i 00:06:27.607 06:46:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.607 06:46:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.607 06:46:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:27.607 06:46:49 -- common/autotest_common.sh@871 -- # break 00:06:27.607 06:46:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.607 06:46:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.607 06:46:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.607 1+0 records in 00:06:27.607 1+0 records out 00:06:27.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225095 s, 18.2 MB/s 00:06:27.607 06:46:49 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.607 06:46:49 -- common/autotest_common.sh@884 -- # size=4096 00:06:27.607 06:46:49 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.607 06:46:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.607 06:46:49 -- common/autotest_common.sh@887 -- # return 0 00:06:27.607 06:46:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.607 06:46:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.607 06:46:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.607 06:46:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.607 06:46:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.866 06:46:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.866 { 00:06:27.866 "nbd_device": "/dev/nbd0", 00:06:27.866 "bdev_name": "Malloc0" 00:06:27.866 }, 00:06:27.866 { 00:06:27.866 "nbd_device": "/dev/nbd1", 00:06:27.866 "bdev_name": "Malloc1" 00:06:27.866 } 00:06:27.866 ]' 
00:06:27.866 06:46:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.866 { 00:06:27.866 "nbd_device": "/dev/nbd0", 00:06:27.866 "bdev_name": "Malloc0" 00:06:27.866 }, 00:06:27.866 { 00:06:27.866 "nbd_device": "/dev/nbd1", 00:06:27.866 "bdev_name": "Malloc1" 00:06:27.866 } 00:06:27.866 ]' 00:06:27.866 06:46:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.867 /dev/nbd1' 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.867 /dev/nbd1' 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.867 256+0 records in 00:06:27.867 256+0 records out 00:06:27.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011557 s, 90.7 MB/s 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.867 256+0 records in 00:06:27.867 256+0 records out 00:06:27.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192538 s, 54.5 MB/s 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.867 06:46:49 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.126 256+0 records in 00:06:28.126 256+0 records out 00:06:28.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203417 s, 51.5 MB/s 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
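
Each dd/cmp burst above is one pass of the same write-then-verify cycle. A condensed sketch, with tmp_file standing in for the nbdrandtest path in the trace:

    # Seed 1 MiB of random data, push it through each NBD device with
    # O_DIRECT writes, then compare the device contents byte-for-byte
    # against the source file; cmp exits non-zero on any mismatch.
    tmp_file=/tmp/nbdrandtest
    nbd_list=('/dev/nbd0' '/dev/nbd1')
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256    # 256 x 4 KiB = 1 MiB
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"
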
00:06:28.126 06:46:49 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@51 -- # local i 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@41 -- # break 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.126 06:46:49 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@41 -- # break 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.385 06:46:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@65 -- # true 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.644 06:46:50 -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.644 06:46:50 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.904 06:46:50 -- event/event.sh@35 -- # sleep 3 00:06:28.904 [2024-12-15 06:46:50.530869] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:29.163 [2024-12-15 06:46:50.565416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.163 [2024-12-15 06:46:50.565419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.163 [2024-12-15 06:46:50.606161] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.163 [2024-12-15 06:46:50.606205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.452 06:46:53 -- event/event.sh@23 -- # for i in {0..2} 00:06:32.452 06:46:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:32.452 spdk_app_start Round 1 00:06:32.452 06:46:53 -- event/event.sh@25 -- # waitforlisten 1186755 /var/tmp/spdk-nbd.sock 00:06:32.452 06:46:53 -- common/autotest_common.sh@829 -- # '[' -z 1186755 ']' 00:06:32.452 06:46:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.452 06:46:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.452 06:46:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.452 06:46:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.452 06:46:53 -- common/autotest_common.sh@10 -- # set +x 00:06:32.452 06:46:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.452 06:46:53 -- common/autotest_common.sh@862 -- # return 0 00:06:32.452 06:46:53 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.452 Malloc0 00:06:32.452 06:46:53 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.452 Malloc1 00:06:32.452 06:46:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.452 06:46:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.452 06:46:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.452 06:46:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.452 06:46:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@12 -- # local i 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.453 06:46:53 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.712 /dev/nbd0 00:06:32.712 06:46:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.712 06:46:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.712 
06:46:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:32.712 06:46:54 -- common/autotest_common.sh@867 -- # local i 00:06:32.712 06:46:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.712 06:46:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.712 06:46:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:32.712 06:46:54 -- common/autotest_common.sh@871 -- # break 00:06:32.712 06:46:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.712 06:46:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.712 06:46:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.712 1+0 records in 00:06:32.712 1+0 records out 00:06:32.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023536 s, 17.4 MB/s 00:06:32.712 06:46:54 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.712 06:46:54 -- common/autotest_common.sh@884 -- # size=4096 00:06:32.712 06:46:54 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.712 06:46:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.712 06:46:54 -- common/autotest_common.sh@887 -- # return 0 00:06:32.712 06:46:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.712 06:46:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.712 06:46:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.712 /dev/nbd1 00:06:32.712 06:46:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.712 06:46:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.712 06:46:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:32.712 06:46:54 -- common/autotest_common.sh@867 -- # local i 00:06:32.712 06:46:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.712 06:46:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.712 06:46:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:32.971 06:46:54 -- common/autotest_common.sh@871 -- # break 00:06:32.971 06:46:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.971 06:46:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.971 06:46:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.971 1+0 records in 00:06:32.971 1+0 records out 00:06:32.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237606 s, 17.2 MB/s 00:06:32.971 06:46:54 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.971 06:46:54 -- common/autotest_common.sh@884 -- # size=4096 00:06:32.971 06:46:54 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.971 06:46:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.971 06:46:54 -- common/autotest_common.sh@887 -- # return 0 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.971 06:46:54 
-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.971 { 00:06:32.971 "nbd_device": "/dev/nbd0", 00:06:32.971 "bdev_name": "Malloc0" 00:06:32.971 }, 00:06:32.971 { 00:06:32.971 "nbd_device": "/dev/nbd1", 00:06:32.971 "bdev_name": "Malloc1" 00:06:32.971 } 00:06:32.971 ]' 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.971 { 00:06:32.971 "nbd_device": "/dev/nbd0", 00:06:32.971 "bdev_name": "Malloc0" 00:06:32.971 }, 00:06:32.971 { 00:06:32.971 "nbd_device": "/dev/nbd1", 00:06:32.971 "bdev_name": "Malloc1" 00:06:32.971 } 00:06:32.971 ]' 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.971 /dev/nbd1' 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.971 /dev/nbd1' 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.971 06:46:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.971 256+0 records in 00:06:32.971 256+0 records out 00:06:32.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104433 s, 100 MB/s 00:06:32.972 06:46:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.972 06:46:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.231 256+0 records in 00:06:33.231 256+0 records out 00:06:33.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197981 s, 53.0 MB/s 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.231 256+0 records in 00:06:33.231 256+0 records out 00:06:33.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203205 s, 51.6 MB/s 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.231 06:46:54 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@51 -- # local i 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.231 06:46:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@41 -- # break 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.490 06:46:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@41 -- # break 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.490 06:46:55 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@65 -- # true 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.749 06:46:55 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.749 06:46:55 -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.749 06:46:55 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.008 06:46:55 -- event/event.sh@35 -- # sleep 3 00:06:34.267 [2024-12-15 06:46:55.677290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.267 [2024-12-15 06:46:55.709709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.267 [2024-12-15 06:46:55.709712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.267 [2024-12-15 06:46:55.751270] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.268 [2024-12-15 06:46:55.751311] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.557 06:46:58 -- event/event.sh@23 -- # for i in {0..2} 00:06:37.557 06:46:58 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.557 spdk_app_start Round 2 00:06:37.557 06:46:58 -- event/event.sh@25 -- # waitforlisten 1186755 /var/tmp/spdk-nbd.sock 00:06:37.557 06:46:58 -- common/autotest_common.sh@829 -- # '[' -z 1186755 ']' 00:06:37.557 06:46:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.557 06:46:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.557 06:46:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.557 06:46:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.557 06:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:37.557 06:46:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.557 06:46:58 -- common/autotest_common.sh@862 -- # return 0 00:06:37.557 06:46:58 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.557 Malloc0 00:06:37.557 06:46:58 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.557 Malloc1 00:06:37.557 06:46:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@12 -- # local i 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 
)) 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.557 06:46:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.815 /dev/nbd0 00:06:37.815 06:46:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.815 06:46:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.815 06:46:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:37.815 06:46:59 -- common/autotest_common.sh@867 -- # local i 00:06:37.815 06:46:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.815 06:46:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.815 06:46:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:37.815 06:46:59 -- common/autotest_common.sh@871 -- # break 00:06:37.815 06:46:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.815 06:46:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.815 06:46:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.815 1+0 records in 00:06:37.815 1+0 records out 00:06:37.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231271 s, 17.7 MB/s 00:06:37.815 06:46:59 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:37.815 06:46:59 -- common/autotest_common.sh@884 -- # size=4096 00:06:37.815 06:46:59 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:37.815 06:46:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.815 06:46:59 -- common/autotest_common.sh@887 -- # return 0 00:06:37.815 06:46:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.815 06:46:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.815 06:46:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.815 /dev/nbd1 00:06:37.815 06:46:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.815 06:46:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.815 06:46:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.815 06:46:59 -- common/autotest_common.sh@867 -- # local i 00:06:37.815 06:46:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.815 06:46:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.815 06:46:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.074 06:46:59 -- common/autotest_common.sh@871 -- # break 00:06:38.074 06:46:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.074 06:46:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.074 06:46:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.074 1+0 records in 00:06:38.074 1+0 records out 00:06:38.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259352 s, 15.8 MB/s 00:06:38.074 06:46:59 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.074 06:46:59 -- common/autotest_common.sh@884 -- # size=4096 00:06:38.074 06:46:59 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.074 06:46:59 -- common/autotest_common.sh@886 -- # 
'[' 4096 '!=' 0 ']' 00:06:38.074 06:46:59 -- common/autotest_common.sh@887 -- # return 0 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.074 { 00:06:38.074 "nbd_device": "/dev/nbd0", 00:06:38.074 "bdev_name": "Malloc0" 00:06:38.074 }, 00:06:38.074 { 00:06:38.074 "nbd_device": "/dev/nbd1", 00:06:38.074 "bdev_name": "Malloc1" 00:06:38.074 } 00:06:38.074 ]' 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.074 { 00:06:38.074 "nbd_device": "/dev/nbd0", 00:06:38.074 "bdev_name": "Malloc0" 00:06:38.074 }, 00:06:38.074 { 00:06:38.074 "nbd_device": "/dev/nbd1", 00:06:38.074 "bdev_name": "Malloc1" 00:06:38.074 } 00:06:38.074 ]' 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.074 /dev/nbd1' 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.074 /dev/nbd1' 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.074 06:46:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.075 06:46:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.075 256+0 records in 00:06:38.075 256+0 records out 00:06:38.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115532 s, 90.8 MB/s 00:06:38.075 06:46:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.075 06:46:59 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.334 256+0 records in 00:06:38.334 256+0 records out 00:06:38.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194108 s, 54.0 MB/s 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.334 256+0 records in 00:06:38.334 256+0 records out 00:06:38.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204506 s, 51.3 MB/s 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.334 06:46:59 -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@51 -- # local i 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.334 06:46:59 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@41 -- # break 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.593 06:46:59 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@41 -- # break 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.593 06:47:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.852 06:47:00 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@65 -- # true 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.852 06:47:00 -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.852 06:47:00 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.111 06:47:00 -- event/event.sh@35 -- # sleep 3 00:06:39.369 [2024-12-15 06:47:00.769652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.369 [2024-12-15 06:47:00.802034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.369 [2024-12-15 06:47:00.802038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.369 [2024-12-15 06:47:00.843101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.369 [2024-12-15 06:47:00.843144] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.657 06:47:03 -- event/event.sh@38 -- # waitforlisten 1186755 /var/tmp/spdk-nbd.sock 00:06:42.657 06:47:03 -- common/autotest_common.sh@829 -- # '[' -z 1186755 ']' 00:06:42.657 06:47:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.657 06:47:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.657 06:47:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.657 06:47:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.657 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:42.657 06:47:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.657 06:47:03 -- common/autotest_common.sh@862 -- # return 0 00:06:42.657 06:47:03 -- event/event.sh@39 -- # killprocess 1186755 00:06:42.657 06:47:03 -- common/autotest_common.sh@936 -- # '[' -z 1186755 ']' 00:06:42.657 06:47:03 -- common/autotest_common.sh@940 -- # kill -0 1186755 00:06:42.657 06:47:03 -- common/autotest_common.sh@941 -- # uname 00:06:42.657 06:47:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.657 06:47:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1186755 00:06:42.657 06:47:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.657 06:47:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.657 06:47:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1186755' 00:06:42.657 killing process with pid 1186755 00:06:42.657 06:47:03 -- common/autotest_common.sh@955 -- # kill 1186755 00:06:42.657 06:47:03 -- common/autotest_common.sh@960 -- # wait 1186755 00:06:42.657 spdk_app_start is called in Round 0. 00:06:42.657 Shutdown signal received, stop current app iteration 00:06:42.657 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:42.657 spdk_app_start is called in Round 1. 
00:06:42.657 Shutdown signal received, stop current app iteration 00:06:42.657 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:42.657 spdk_app_start is called in Round 2. 00:06:42.657 Shutdown signal received, stop current app iteration 00:06:42.657 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:42.657 spdk_app_start is called in Round 3. 00:06:42.657 Shutdown signal received, stop current app iteration 00:06:42.657 06:47:03 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:42.657 06:47:03 -- event/event.sh@42 -- # return 0 00:06:42.657 00:06:42.657 real 0m16.469s 00:06:42.657 user 0m35.400s 00:06:42.657 sys 0m2.924s 00:06:42.657 06:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.657 06:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:42.657 ************************************ 00:06:42.657 END TEST app_repeat 00:06:42.657 ************************************ 00:06:42.657 06:47:04 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:42.657 06:47:04 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:42.657 06:47:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.657 06:47:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.657 06:47:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.657 ************************************ 00:06:42.657 START TEST cpu_locks 00:06:42.657 ************************************ 00:06:42.657 06:47:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:42.657 * Looking for test storage... 00:06:42.657 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:42.657 06:47:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:42.657 06:47:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:42.657 06:47:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:42.657 06:47:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:42.657 06:47:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:42.657 06:47:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:42.657 06:47:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:42.657 06:47:04 -- scripts/common.sh@335 -- # IFS=.-: 00:06:42.657 06:47:04 -- scripts/common.sh@335 -- # read -ra ver1 00:06:42.657 06:47:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.657 06:47:04 -- scripts/common.sh@336 -- # read -ra ver2 00:06:42.657 06:47:04 -- scripts/common.sh@337 -- # local 'op=<' 00:06:42.657 06:47:04 -- scripts/common.sh@339 -- # ver1_l=2 00:06:42.657 06:47:04 -- scripts/common.sh@340 -- # ver2_l=1 00:06:42.657 06:47:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:42.657 06:47:04 -- scripts/common.sh@343 -- # case "$op" in 00:06:42.657 06:47:04 -- scripts/common.sh@344 -- # : 1 00:06:42.657 06:47:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:42.657 06:47:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.657 06:47:04 -- scripts/common.sh@364 -- # decimal 1 00:06:42.657 06:47:04 -- scripts/common.sh@352 -- # local d=1 00:06:42.657 06:47:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.657 06:47:04 -- scripts/common.sh@354 -- # echo 1 00:06:42.657 06:47:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:42.657 06:47:04 -- scripts/common.sh@365 -- # decimal 2 00:06:42.657 06:47:04 -- scripts/common.sh@352 -- # local d=2 00:06:42.657 06:47:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.657 06:47:04 -- scripts/common.sh@354 -- # echo 2 00:06:42.657 06:47:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:42.657 06:47:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:42.657 06:47:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:42.657 06:47:04 -- scripts/common.sh@367 -- # return 0 00:06:42.657 06:47:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.657 06:47:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.657 --rc genhtml_branch_coverage=1 00:06:42.657 --rc genhtml_function_coverage=1 00:06:42.657 --rc genhtml_legend=1 00:06:42.657 --rc geninfo_all_blocks=1 00:06:42.657 --rc geninfo_unexecuted_blocks=1 00:06:42.657 00:06:42.657 ' 00:06:42.657 06:47:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.657 --rc genhtml_branch_coverage=1 00:06:42.657 --rc genhtml_function_coverage=1 00:06:42.657 --rc genhtml_legend=1 00:06:42.657 --rc geninfo_all_blocks=1 00:06:42.657 --rc geninfo_unexecuted_blocks=1 00:06:42.657 00:06:42.657 ' 00:06:42.657 06:47:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.657 --rc genhtml_branch_coverage=1 00:06:42.657 --rc genhtml_function_coverage=1 00:06:42.657 --rc genhtml_legend=1 00:06:42.657 --rc geninfo_all_blocks=1 00:06:42.657 --rc geninfo_unexecuted_blocks=1 00:06:42.657 00:06:42.657 ' 00:06:42.657 06:47:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:42.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.657 --rc genhtml_branch_coverage=1 00:06:42.657 --rc genhtml_function_coverage=1 00:06:42.657 --rc genhtml_legend=1 00:06:42.657 --rc geninfo_all_blocks=1 00:06:42.657 --rc geninfo_unexecuted_blocks=1 00:06:42.657 00:06:42.657 ' 00:06:42.657 06:47:04 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:42.657 06:47:04 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:42.657 06:47:04 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:42.657 06:47:04 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:42.657 06:47:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.657 06:47:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.657 06:47:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.657 ************************************ 00:06:42.657 START TEST default_locks 00:06:42.657 ************************************ 00:06:42.657 06:47:04 -- common/autotest_common.sh@1114 -- # default_locks 00:06:42.658 06:47:04 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1189887 00:06:42.658 06:47:04 -- event/cpu_locks.sh@47 -- # waitforlisten 1189887 00:06:42.658 06:47:04 -- event/cpu_locks.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.658 06:47:04 -- common/autotest_common.sh@829 -- # '[' -z 1189887 ']' 00:06:42.658 06:47:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.658 06:47:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.658 06:47:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.658 06:47:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.658 06:47:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.658 [2024-12-15 06:47:04.258841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:42.658 [2024-12-15 06:47:04.258895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189887 ] 00:06:42.658 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.917 [2024-12-15 06:47:04.329947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.917 [2024-12-15 06:47:04.366323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.917 [2024-12-15 06:47:04.366448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.485 06:47:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.485 06:47:05 -- common/autotest_common.sh@862 -- # return 0 00:06:43.485 06:47:05 -- event/cpu_locks.sh@49 -- # locks_exist 1189887 00:06:43.485 06:47:05 -- event/cpu_locks.sh@22 -- # lslocks -p 1189887 00:06:43.485 06:47:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.053 lslocks: write error 00:06:44.053 06:47:05 -- event/cpu_locks.sh@50 -- # killprocess 1189887 00:06:44.053 06:47:05 -- common/autotest_common.sh@936 -- # '[' -z 1189887 ']' 00:06:44.053 06:47:05 -- common/autotest_common.sh@940 -- # kill -0 1189887 00:06:44.053 06:47:05 -- common/autotest_common.sh@941 -- # uname 00:06:44.053 06:47:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.053 06:47:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1189887 00:06:44.312 06:47:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.312 06:47:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.312 06:47:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1189887' 00:06:44.312 killing process with pid 1189887 00:06:44.312 06:47:05 -- common/autotest_common.sh@955 -- # kill 1189887 00:06:44.312 06:47:05 -- common/autotest_common.sh@960 -- # wait 1189887 00:06:44.572 06:47:06 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1189887 00:06:44.572 06:47:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:44.572 06:47:06 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1189887 00:06:44.572 06:47:06 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:44.572 06:47:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.572 06:47:06 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:44.572 06:47:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.572 06:47:06 -- common/autotest_common.sh@653 -- # waitforlisten 1189887 00:06:44.572 06:47:06 -- 
common/autotest_common.sh@829 -- # '[' -z 1189887 ']' 00:06:44.572 06:47:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.572 06:47:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.572 06:47:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.572 06:47:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.572 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.572 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1189887) - No such process 00:06:44.572 ERROR: process (pid: 1189887) is no longer running 00:06:44.572 06:47:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.572 06:47:06 -- common/autotest_common.sh@862 -- # return 1 00:06:44.572 06:47:06 -- common/autotest_common.sh@653 -- # es=1 00:06:44.572 06:47:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.572 06:47:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.572 06:47:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.572 06:47:06 -- event/cpu_locks.sh@54 -- # no_locks 00:06:44.572 06:47:06 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.572 06:47:06 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.572 06:47:06 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.572 00:06:44.572 real 0m1.835s 00:06:44.572 user 0m1.939s 00:06:44.572 sys 0m0.697s 00:06:44.572 06:47:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.572 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.572 ************************************ 00:06:44.572 END TEST default_locks 00:06:44.572 ************************************ 00:06:44.572 06:47:06 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:44.572 06:47:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.572 06:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.572 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.572 ************************************ 00:06:44.572 START TEST default_locks_via_rpc 00:06:44.572 ************************************ 00:06:44.572 06:47:06 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:44.572 06:47:06 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1190283 00:06:44.572 06:47:06 -- event/cpu_locks.sh@63 -- # waitforlisten 1190283 00:06:44.572 06:47:06 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.572 06:47:06 -- common/autotest_common.sh@829 -- # '[' -z 1190283 ']' 00:06:44.572 06:47:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.572 06:47:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.572 06:47:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.572 06:47:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.572 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:44.572 [2024-12-15 06:47:06.144601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
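
The default_locks flow that just finished pairs a positive check (the running target holds its core lock) with a negative one (waitforlisten must fail once the target is killed). A sketch of the two helpers it leans on, reconstructed from the traced lines rather than copied from the script:

    # locks_exist greps lslocks output for the per-core lock file; NOT
    # inverts an expected failure so the test can assert on it directly.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    NOT() {
        ! "$@"    # return 0 only when the wrapped command fails
    }
    locks_exist "$spdk_tgt_pid"          # held while the target runs
    kill "$spdk_tgt_pid"
    NOT waitforlisten "$spdk_tgt_pid"    # must fail: the process is gone
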
00:06:44.572 [2024-12-15 06:47:06.144655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190283 ] 00:06:44.572 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.832 [2024-12-15 06:47:06.214405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.832 [2024-12-15 06:47:06.249573] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.832 [2024-12-15 06:47:06.249698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.401 06:47:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.401 06:47:06 -- common/autotest_common.sh@862 -- # return 0 00:06:45.401 06:47:06 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.401 06:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.401 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.401 06:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.401 06:47:06 -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.401 06:47:06 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.401 06:47:06 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.401 06:47:06 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.401 06:47:06 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.401 06:47:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.401 06:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:45.401 06:47:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.401 06:47:06 -- event/cpu_locks.sh@71 -- # locks_exist 1190283 00:06:45.401 06:47:06 -- event/cpu_locks.sh@22 -- # lslocks -p 1190283 00:06:45.401 06:47:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.969 06:47:07 -- event/cpu_locks.sh@73 -- # killprocess 1190283 00:06:45.969 06:47:07 -- common/autotest_common.sh@936 -- # '[' -z 1190283 ']' 00:06:45.969 06:47:07 -- common/autotest_common.sh@940 -- # kill -0 1190283 00:06:45.969 06:47:07 -- common/autotest_common.sh@941 -- # uname 00:06:45.969 06:47:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.969 06:47:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1190283 00:06:46.228 06:47:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.228 06:47:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.228 06:47:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1190283' 00:06:46.228 killing process with pid 1190283 00:06:46.228 06:47:07 -- common/autotest_common.sh@955 -- # kill 1190283 00:06:46.228 06:47:07 -- common/autotest_common.sh@960 -- # wait 1190283 00:06:46.488 00:06:46.488 real 0m1.841s 00:06:46.488 user 0m1.961s 00:06:46.488 sys 0m0.706s 00:06:46.488 06:47:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.488 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:06:46.488 ************************************ 00:06:46.488 END TEST default_locks_via_rpc 00:06:46.488 ************************************ 00:06:46.488 06:47:07 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:46.488 06:47:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:46.488 06:47:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.488 06:47:07 -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.488 ************************************ 00:06:46.488 START TEST non_locking_app_on_locked_coremask 00:06:46.488 ************************************ 00:06:46.488 06:47:07 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:46.488 06:47:07 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1190738 00:06:46.488 06:47:07 -- event/cpu_locks.sh@81 -- # waitforlisten 1190738 /var/tmp/spdk.sock 00:06:46.488 06:47:07 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.488 06:47:07 -- common/autotest_common.sh@829 -- # '[' -z 1190738 ']' 00:06:46.488 06:47:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.488 06:47:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.488 06:47:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.488 06:47:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.488 06:47:07 -- common/autotest_common.sh@10 -- # set +x 00:06:46.488 [2024-12-15 06:47:08.035247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:46.488 [2024-12-15 06:47:08.035308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190738 ] 00:06:46.488 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.488 [2024-12-15 06:47:08.104073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.747 [2024-12-15 06:47:08.141354] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.747 [2024-12-15 06:47:08.141468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.316 06:47:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.316 06:47:08 -- common/autotest_common.sh@862 -- # return 0 00:06:47.316 06:47:08 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1190759 00:06:47.316 06:47:08 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:47.316 06:47:08 -- event/cpu_locks.sh@85 -- # waitforlisten 1190759 /var/tmp/spdk2.sock 00:06:47.316 06:47:08 -- common/autotest_common.sh@829 -- # '[' -z 1190759 ']' 00:06:47.316 06:47:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.316 06:47:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.316 06:47:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.316 06:47:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.316 06:47:08 -- common/autotest_common.sh@10 -- # set +x 00:06:47.316 [2024-12-15 06:47:08.883993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
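
This test drives two spdk_tgt instances on the same cpumask: the first takes the core-0 lock, and the second only comes up because --disable-cpumask-locks skips lock acquisition while -r points it at its own RPC socket. A sketch under those assumptions, with pid handling simplified:

    # Two targets, one cpumask; only the lock-disabled instance can share it.
    build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock

    locks_exist "$pid1"    # the core lock still belongs to the first instance
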
00:06:47.316 [2024-12-15 06:47:08.884050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190759 ] 00:06:47.316 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.575 [2024-12-15 06:47:08.977137] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.575 [2024-12-15 06:47:08.977161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.575 [2024-12-15 06:47:09.049039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.575 [2024-12-15 06:47:09.049154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.144 06:47:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.144 06:47:09 -- common/autotest_common.sh@862 -- # return 0 00:06:48.144 06:47:09 -- event/cpu_locks.sh@87 -- # locks_exist 1190738 00:06:48.144 06:47:09 -- event/cpu_locks.sh@22 -- # lslocks -p 1190738 00:06:48.144 06:47:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.081 lslocks: write error 00:06:49.081 06:47:10 -- event/cpu_locks.sh@89 -- # killprocess 1190738 00:06:49.081 06:47:10 -- common/autotest_common.sh@936 -- # '[' -z 1190738 ']' 00:06:49.081 06:47:10 -- common/autotest_common.sh@940 -- # kill -0 1190738 00:06:49.081 06:47:10 -- common/autotest_common.sh@941 -- # uname 00:06:49.081 06:47:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.081 06:47:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1190738 00:06:49.340 06:47:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.340 06:47:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.340 06:47:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1190738' 00:06:49.340 killing process with pid 1190738 00:06:49.340 06:47:10 -- common/autotest_common.sh@955 -- # kill 1190738 00:06:49.340 06:47:10 -- common/autotest_common.sh@960 -- # wait 1190738 00:06:49.909 06:47:11 -- event/cpu_locks.sh@90 -- # killprocess 1190759 00:06:49.909 06:47:11 -- common/autotest_common.sh@936 -- # '[' -z 1190759 ']' 00:06:49.909 06:47:11 -- common/autotest_common.sh@940 -- # kill -0 1190759 00:06:49.909 06:47:11 -- common/autotest_common.sh@941 -- # uname 00:06:49.909 06:47:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.909 06:47:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1190759 00:06:49.909 06:47:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.909 06:47:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.909 06:47:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1190759' 00:06:49.909 killing process with pid 1190759 00:06:49.909 06:47:11 -- common/autotest_common.sh@955 -- # kill 1190759 00:06:49.909 06:47:11 -- common/autotest_common.sh@960 -- # wait 1190759 00:06:50.172 00:06:50.172 real 0m3.705s 00:06:50.172 user 0m4.022s 00:06:50.172 sys 0m1.234s 00:06:50.172 06:47:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.172 06:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.172 ************************************ 00:06:50.172 END TEST non_locking_app_on_locked_coremask 00:06:50.172 ************************************ 00:06:50.172 06:47:11 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:50.172 06:47:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.172 06:47:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.172 06:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.172 ************************************ 00:06:50.172 START TEST locking_app_on_unlocked_coremask 00:06:50.172 ************************************ 00:06:50.172 06:47:11 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:50.172 06:47:11 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1191327 00:06:50.172 06:47:11 -- event/cpu_locks.sh@99 -- # waitforlisten 1191327 /var/tmp/spdk.sock 00:06:50.172 06:47:11 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:50.172 06:47:11 -- common/autotest_common.sh@829 -- # '[' -z 1191327 ']' 00:06:50.172 06:47:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.172 06:47:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.172 06:47:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.172 06:47:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.172 06:47:11 -- common/autotest_common.sh@10 -- # set +x 00:06:50.172 [2024-12-15 06:47:11.794411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.172 [2024-12-15 06:47:11.794464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191327 ] 00:06:50.433 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.433 [2024-12-15 06:47:11.864376] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:50.433 [2024-12-15 06:47:11.864408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.433 [2024-12-15 06:47:11.896643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.433 [2024-12-15 06:47:11.896760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.001 06:47:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.001 06:47:12 -- common/autotest_common.sh@862 -- # return 0 00:06:51.001 06:47:12 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1191596 00:06:51.001 06:47:12 -- event/cpu_locks.sh@103 -- # waitforlisten 1191596 /var/tmp/spdk2.sock 00:06:51.001 06:47:12 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.001 06:47:12 -- common/autotest_common.sh@829 -- # '[' -z 1191596 ']' 00:06:51.001 06:47:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.001 06:47:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.001 06:47:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:51.001 06:47:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.001 06:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:51.260 [2024-12-15 06:47:12.644728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.260 [2024-12-15 06:47:12.644780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191596 ] 00:06:51.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.260 [2024-12-15 06:47:12.739912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.260 [2024-12-15 06:47:12.811796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.260 [2024-12-15 06:47:12.811919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.829 06:47:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.829 06:47:13 -- common/autotest_common.sh@862 -- # return 0 00:06:51.829 06:47:13 -- event/cpu_locks.sh@105 -- # locks_exist 1191596 00:06:51.829 06:47:13 -- event/cpu_locks.sh@22 -- # lslocks -p 1191596 00:06:51.829 06:47:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.767 lslocks: write error 00:06:52.767 06:47:14 -- event/cpu_locks.sh@107 -- # killprocess 1191327 00:06:52.767 06:47:14 -- common/autotest_common.sh@936 -- # '[' -z 1191327 ']' 00:06:52.767 06:47:14 -- common/autotest_common.sh@940 -- # kill -0 1191327 00:06:52.767 06:47:14 -- common/autotest_common.sh@941 -- # uname 00:06:52.767 06:47:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.767 06:47:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1191327 00:06:52.767 06:47:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.767 06:47:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.767 06:47:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1191327' 00:06:52.767 killing process with pid 1191327 00:06:52.767 06:47:14 -- common/autotest_common.sh@955 -- # kill 1191327 00:06:52.767 06:47:14 -- common/autotest_common.sh@960 -- # wait 1191327 00:06:53.336 06:47:14 -- event/cpu_locks.sh@108 -- # killprocess 1191596 00:06:53.336 06:47:14 -- common/autotest_common.sh@936 -- # '[' -z 1191596 ']' 00:06:53.336 06:47:14 -- common/autotest_common.sh@940 -- # kill -0 1191596 00:06:53.336 06:47:14 -- common/autotest_common.sh@941 -- # uname 00:06:53.336 06:47:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:53.336 06:47:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1191596 00:06:53.336 06:47:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:53.336 06:47:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:53.336 06:47:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1191596' 00:06:53.336 killing process with pid 1191596 00:06:53.336 06:47:14 -- common/autotest_common.sh@955 -- # kill 1191596 00:06:53.336 06:47:14 -- common/autotest_common.sh@960 -- # wait 1191596 00:06:53.908 00:06:53.908 real 0m3.507s 00:06:53.908 user 0m3.782s 00:06:53.908 sys 0m1.124s 00:06:53.908 06:47:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.908 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:53.908 ************************************ 00:06:53.908 END TEST locking_app_on_unlocked_coremask 
00:06:53.908 ************************************ 00:06:53.908 06:47:15 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:53.908 06:47:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.908 06:47:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.909 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:53.909 ************************************ 00:06:53.909 START TEST locking_app_on_locked_coremask 00:06:53.909 ************************************ 00:06:53.909 06:47:15 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:53.909 06:47:15 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1192003 00:06:53.909 06:47:15 -- event/cpu_locks.sh@116 -- # waitforlisten 1192003 /var/tmp/spdk.sock 00:06:53.909 06:47:15 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.909 06:47:15 -- common/autotest_common.sh@829 -- # '[' -z 1192003 ']' 00:06:53.909 06:47:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.909 06:47:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.909 06:47:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.909 06:47:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.909 06:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:53.909 [2024-12-15 06:47:15.345745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:53.909 [2024-12-15 06:47:15.345804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192003 ] 00:06:53.909 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.909 [2024-12-15 06:47:15.415701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.909 [2024-12-15 06:47:15.452564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.909 [2024-12-15 06:47:15.452684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.846 06:47:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.846 06:47:16 -- common/autotest_common.sh@862 -- # return 0 00:06:54.846 06:47:16 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1192181 00:06:54.846 06:47:16 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1192181 /var/tmp/spdk2.sock 00:06:54.846 06:47:16 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.846 06:47:16 -- common/autotest_common.sh@650 -- # local es=0 00:06:54.846 06:47:16 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1192181 /var/tmp/spdk2.sock 00:06:54.846 06:47:16 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:54.846 06:47:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.846 06:47:16 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:54.846 06:47:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.846 06:47:16 -- common/autotest_common.sh@653 -- # waitforlisten 1192181 /var/tmp/spdk2.sock 00:06:54.846 06:47:16 -- common/autotest_common.sh@829 -- # '[' 
-z 1192181 ']' 00:06:54.846 06:47:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.846 06:47:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.846 06:47:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.846 06:47:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.846 06:47:16 -- common/autotest_common.sh@10 -- # set +x 00:06:54.846 [2024-12-15 06:47:16.197301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.846 [2024-12-15 06:47:16.197353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192181 ] 00:06:54.846 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.846 [2024-12-15 06:47:16.292051] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1192003 has claimed it. 00:06:54.846 [2024-12-15 06:47:16.292085] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.415 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1192181) - No such process 00:06:55.415 ERROR: process (pid: 1192181) is no longer running 00:06:55.415 06:47:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.415 06:47:16 -- common/autotest_common.sh@862 -- # return 1 00:06:55.415 06:47:16 -- common/autotest_common.sh@653 -- # es=1 00:06:55.415 06:47:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.415 06:47:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.415 06:47:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.415 06:47:16 -- event/cpu_locks.sh@122 -- # locks_exist 1192003 00:06:55.415 06:47:16 -- event/cpu_locks.sh@22 -- # lslocks -p 1192003 00:06:55.415 06:47:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.983 lslocks: write error 00:06:55.983 06:47:17 -- event/cpu_locks.sh@124 -- # killprocess 1192003 00:06:55.983 06:47:17 -- common/autotest_common.sh@936 -- # '[' -z 1192003 ']' 00:06:55.983 06:47:17 -- common/autotest_common.sh@940 -- # kill -0 1192003 00:06:55.983 06:47:17 -- common/autotest_common.sh@941 -- # uname 00:06:55.983 06:47:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:55.983 06:47:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192003 00:06:55.983 06:47:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:55.983 06:47:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:55.983 06:47:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192003' 00:06:55.983 killing process with pid 1192003 00:06:55.983 06:47:17 -- common/autotest_common.sh@955 -- # kill 1192003 00:06:55.983 06:47:17 -- common/autotest_common.sh@960 -- # wait 1192003 00:06:56.243 00:06:56.243 real 0m2.378s 00:06:56.243 user 0m2.613s 00:06:56.243 sys 0m0.719s 00:06:56.243 06:47:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.243 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:56.243 ************************************ 00:06:56.243 END TEST locking_app_on_locked_coremask 00:06:56.243 ************************************ 00:06:56.243 06:47:17 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:56.243 06:47:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.243 06:47:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.243 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:56.243 ************************************ 00:06:56.243 START TEST locking_overlapped_coremask 00:06:56.243 ************************************ 00:06:56.243 06:47:17 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:56.243 06:47:17 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1192479 00:06:56.243 06:47:17 -- event/cpu_locks.sh@133 -- # waitforlisten 1192479 /var/tmp/spdk.sock 00:06:56.243 06:47:17 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:56.243 06:47:17 -- common/autotest_common.sh@829 -- # '[' -z 1192479 ']' 00:06:56.243 06:47:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.243 06:47:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.243 06:47:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.243 06:47:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.243 06:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:56.243 [2024-12-15 06:47:17.771447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:56.243 [2024-12-15 06:47:17.771504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192479 ] 00:06:56.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.243 [2024-12-15 06:47:17.839999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.243 [2024-12-15 06:47:17.873704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.243 [2024-12-15 06:47:17.873859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.243 [2024-12-15 06:47:17.873972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.243 [2024-12-15 06:47:17.873980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.181 06:47:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.181 06:47:18 -- common/autotest_common.sh@862 -- # return 0 00:06:57.181 06:47:18 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1192685 00:06:57.181 06:47:18 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1192685 /var/tmp/spdk2.sock 00:06:57.181 06:47:18 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:57.181 06:47:18 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.181 06:47:18 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1192685 /var/tmp/spdk2.sock 00:06:57.181 06:47:18 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:57.181 06:47:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.181 06:47:18 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:57.181 06:47:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.181 06:47:18 -- 
common/autotest_common.sh@653 -- # waitforlisten 1192685 /var/tmp/spdk2.sock 00:06:57.181 06:47:18 -- common/autotest_common.sh@829 -- # '[' -z 1192685 ']' 00:06:57.181 06:47:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.181 06:47:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.181 06:47:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.181 06:47:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.181 06:47:18 -- common/autotest_common.sh@10 -- # set +x 00:06:57.181 [2024-12-15 06:47:18.631083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.181 [2024-12-15 06:47:18.631137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192685 ] 00:06:57.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.181 [2024-12-15 06:47:18.727833] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1192479 has claimed it. 00:06:57.181 [2024-12-15 06:47:18.727879] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.748 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1192685) - No such process 00:06:57.748 ERROR: process (pid: 1192685) is no longer running 00:06:57.748 06:47:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.748 06:47:19 -- common/autotest_common.sh@862 -- # return 1 00:06:57.748 06:47:19 -- common/autotest_common.sh@653 -- # es=1 00:06:57.748 06:47:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.748 06:47:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.748 06:47:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.748 06:47:19 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:57.748 06:47:19 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.748 06:47:19 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.749 06:47:19 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.749 06:47:19 -- event/cpu_locks.sh@141 -- # killprocess 1192479 00:06:57.749 06:47:19 -- common/autotest_common.sh@936 -- # '[' -z 1192479 ']' 00:06:57.749 06:47:19 -- common/autotest_common.sh@940 -- # kill -0 1192479 00:06:57.749 06:47:19 -- common/autotest_common.sh@941 -- # uname 00:06:57.749 06:47:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.749 06:47:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192479 00:06:57.749 06:47:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.749 06:47:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.749 06:47:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192479' 00:06:57.749 killing process with pid 1192479 00:06:57.749 06:47:19 -- common/autotest_common.sh@955 -- # kill 1192479 00:06:57.749 06:47:19 -- 
common/autotest_common.sh@960 -- # wait 1192479 00:06:58.008 00:06:58.008 real 0m1.901s 00:06:58.008 user 0m5.448s 00:06:58.008 sys 0m0.453s 00:06:58.008 06:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.008 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:58.008 ************************************ 00:06:58.008 END TEST locking_overlapped_coremask 00:06:58.008 ************************************ 00:06:58.267 06:47:19 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:58.267 06:47:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.267 06:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.267 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 ************************************ 00:06:58.267 START TEST locking_overlapped_coremask_via_rpc 00:06:58.267 ************************************ 00:06:58.267 06:47:19 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:58.267 06:47:19 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1192795 00:06:58.267 06:47:19 -- event/cpu_locks.sh@149 -- # waitforlisten 1192795 /var/tmp/spdk.sock 00:06:58.267 06:47:19 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:58.267 06:47:19 -- common/autotest_common.sh@829 -- # '[' -z 1192795 ']' 00:06:58.267 06:47:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.267 06:47:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.267 06:47:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.267 06:47:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.267 06:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 [2024-12-15 06:47:19.727701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:58.267 [2024-12-15 06:47:19.727758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192795 ] 00:06:58.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.267 [2024-12-15 06:47:19.797997] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.267 [2024-12-15 06:47:19.798023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.267 [2024-12-15 06:47:19.835891] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.268 [2024-12-15 06:47:19.836052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.268 [2024-12-15 06:47:19.836145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.268 [2024-12-15 06:47:19.836147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.205 06:47:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.205 06:47:20 -- common/autotest_common.sh@862 -- # return 0 00:06:59.205 06:47:20 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1193061 00:06:59.205 06:47:20 -- event/cpu_locks.sh@153 -- # waitforlisten 1193061 /var/tmp/spdk2.sock 00:06:59.206 06:47:20 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:59.206 06:47:20 -- common/autotest_common.sh@829 -- # '[' -z 1193061 ']' 00:06:59.206 06:47:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.206 06:47:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.206 06:47:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.206 06:47:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.206 06:47:20 -- common/autotest_common.sh@10 -- # set +x 00:06:59.206 [2024-12-15 06:47:20.594106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:59.206 [2024-12-15 06:47:20.594154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193061 ] 00:06:59.206 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.206 [2024-12-15 06:47:20.694130] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.206 [2024-12-15 06:47:20.694155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.206 [2024-12-15 06:47:20.767796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:59.206 [2024-12-15 06:47:20.767951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.206 [2024-12-15 06:47:20.768009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:59.206 [2024-12-15 06:47:20.768007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.773 06:47:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.773 06:47:21 -- common/autotest_common.sh@862 -- # return 0 00:06:59.773 06:47:21 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.773 06:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.773 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.033 06:47:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.033 06:47:21 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.033 06:47:21 -- common/autotest_common.sh@650 -- # local es=0 00:07:00.033 06:47:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.033 06:47:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:00.033 06:47:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.033 06:47:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:00.033 06:47:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.033 06:47:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.033 06:47:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.033 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.033 [2024-12-15 06:47:21.422049] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1192795 has claimed it. 00:07:00.033 request: 00:07:00.033 { 00:07:00.033 "method": "framework_enable_cpumask_locks", 00:07:00.033 "req_id": 1 00:07:00.033 } 00:07:00.033 Got JSON-RPC error response 00:07:00.033 response: 00:07:00.033 { 00:07:00.033 "code": -32603, 00:07:00.033 "message": "Failed to claim CPU core: 2" 00:07:00.033 } 00:07:00.033 06:47:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:00.033 06:47:21 -- common/autotest_common.sh@653 -- # es=1 00:07:00.033 06:47:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.033 06:47:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.033 06:47:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.033 06:47:21 -- event/cpu_locks.sh@158 -- # waitforlisten 1192795 /var/tmp/spdk.sock 00:07:00.033 06:47:21 -- common/autotest_common.sh@829 -- # '[' -z 1192795 ']' 00:07:00.033 06:47:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.033 06:47:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.033 06:47:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.033 06:47:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.033 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.033 06:47:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.033 06:47:21 -- common/autotest_common.sh@862 -- # return 0 00:07:00.033 06:47:21 -- event/cpu_locks.sh@159 -- # waitforlisten 1193061 /var/tmp/spdk2.sock 00:07:00.033 06:47:21 -- common/autotest_common.sh@829 -- # '[' -z 1193061 ']' 00:07:00.033 06:47:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.033 06:47:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.033 06:47:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.033 06:47:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.033 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.292 06:47:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.292 06:47:21 -- common/autotest_common.sh@862 -- # return 0 00:07:00.292 06:47:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:00.292 06:47:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.292 06:47:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.292 06:47:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.292 00:07:00.292 real 0m2.144s 00:07:00.292 user 0m0.863s 00:07:00.292 sys 0m0.210s 00:07:00.292 06:47:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.292 06:47:21 -- common/autotest_common.sh@10 -- # set +x 00:07:00.292 ************************************ 00:07:00.292 END TEST locking_overlapped_coremask_via_rpc 00:07:00.292 ************************************ 00:07:00.292 06:47:21 -- event/cpu_locks.sh@174 -- # cleanup 00:07:00.292 06:47:21 -- event/cpu_locks.sh@15 -- # [[ -z 1192795 ]] 00:07:00.292 06:47:21 -- event/cpu_locks.sh@15 -- # killprocess 1192795 00:07:00.292 06:47:21 -- common/autotest_common.sh@936 -- # '[' -z 1192795 ']' 00:07:00.292 06:47:21 -- common/autotest_common.sh@940 -- # kill -0 1192795 00:07:00.292 06:47:21 -- common/autotest_common.sh@941 -- # uname 00:07:00.292 06:47:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.292 06:47:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192795 00:07:00.292 06:47:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:00.292 06:47:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:00.292 06:47:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192795' 00:07:00.292 killing process with pid 1192795 00:07:00.292 06:47:21 -- common/autotest_common.sh@955 -- # kill 1192795 00:07:00.292 06:47:21 -- common/autotest_common.sh@960 -- # wait 1192795 00:07:00.861 06:47:22 -- event/cpu_locks.sh@16 -- # [[ -z 1193061 ]] 00:07:00.861 06:47:22 -- event/cpu_locks.sh@16 -- # killprocess 1193061 00:07:00.861 06:47:22 -- common/autotest_common.sh@936 -- # '[' -z 1193061 ']' 00:07:00.861 06:47:22 -- common/autotest_common.sh@940 -- # kill -0 1193061 00:07:00.861 06:47:22 -- common/autotest_common.sh@941 -- # uname 
00:07:00.861 06:47:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.861 06:47:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193061 00:07:00.861 06:47:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:00.861 06:47:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:00.861 06:47:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193061' 00:07:00.861 killing process with pid 1193061 00:07:00.861 06:47:22 -- common/autotest_common.sh@955 -- # kill 1193061 00:07:00.861 06:47:22 -- common/autotest_common.sh@960 -- # wait 1193061 00:07:01.125 06:47:22 -- event/cpu_locks.sh@18 -- # rm -f 00:07:01.125 06:47:22 -- event/cpu_locks.sh@1 -- # cleanup 00:07:01.125 06:47:22 -- event/cpu_locks.sh@15 -- # [[ -z 1192795 ]] 00:07:01.125 06:47:22 -- event/cpu_locks.sh@15 -- # killprocess 1192795 00:07:01.125 06:47:22 -- common/autotest_common.sh@936 -- # '[' -z 1192795 ']' 00:07:01.125 06:47:22 -- common/autotest_common.sh@940 -- # kill -0 1192795 00:07:01.126 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1192795) - No such process 00:07:01.126 06:47:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1192795 is not found' 00:07:01.126 Process with pid 1192795 is not found 00:07:01.126 06:47:22 -- event/cpu_locks.sh@16 -- # [[ -z 1193061 ]] 00:07:01.126 06:47:22 -- event/cpu_locks.sh@16 -- # killprocess 1193061 00:07:01.126 06:47:22 -- common/autotest_common.sh@936 -- # '[' -z 1193061 ']' 00:07:01.126 06:47:22 -- common/autotest_common.sh@940 -- # kill -0 1193061 00:07:01.126 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1193061) - No such process 00:07:01.126 06:47:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1193061 is not found' 00:07:01.126 Process with pid 1193061 is not found 00:07:01.126 06:47:22 -- event/cpu_locks.sh@18 -- # rm -f 00:07:01.126 00:07:01.126 real 0m18.570s 00:07:01.126 user 0m31.517s 00:07:01.126 sys 0m6.133s 00:07:01.126 06:47:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.126 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:07:01.126 ************************************ 00:07:01.126 END TEST cpu_locks 00:07:01.126 ************************************ 00:07:01.126 00:07:01.126 real 0m43.644s 00:07:01.126 user 1m21.403s 00:07:01.126 sys 0m10.063s 00:07:01.126 06:47:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.126 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:07:01.126 ************************************ 00:07:01.126 END TEST event 00:07:01.126 ************************************ 00:07:01.126 06:47:22 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:01.126 06:47:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.126 06:47:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.126 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:07:01.126 ************************************ 00:07:01.126 START TEST thread 00:07:01.126 ************************************ 00:07:01.126 06:47:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:01.387 * Looking for test storage... 
00:07:01.387 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:01.387 06:47:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:01.387 06:47:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:01.387 06:47:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:01.387 06:47:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:01.387 06:47:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:01.387 06:47:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:01.387 06:47:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:01.387 06:47:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:01.387 06:47:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:01.387 06:47:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.387 06:47:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:01.387 06:47:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:01.387 06:47:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:01.387 06:47:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:01.387 06:47:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:01.387 06:47:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:01.387 06:47:22 -- scripts/common.sh@344 -- # : 1 00:07:01.387 06:47:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:01.387 06:47:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.387 06:47:22 -- scripts/common.sh@364 -- # decimal 1 00:07:01.387 06:47:22 -- scripts/common.sh@352 -- # local d=1 00:07:01.387 06:47:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.387 06:47:22 -- scripts/common.sh@354 -- # echo 1 00:07:01.387 06:47:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:01.387 06:47:22 -- scripts/common.sh@365 -- # decimal 2 00:07:01.387 06:47:22 -- scripts/common.sh@352 -- # local d=2 00:07:01.387 06:47:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.387 06:47:22 -- scripts/common.sh@354 -- # echo 2 00:07:01.387 06:47:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:01.387 06:47:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:01.387 06:47:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:01.387 06:47:22 -- scripts/common.sh@367 -- # return 0 00:07:01.387 06:47:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.387 06:47:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:01.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.387 --rc genhtml_branch_coverage=1 00:07:01.387 --rc genhtml_function_coverage=1 00:07:01.387 --rc genhtml_legend=1 00:07:01.387 --rc geninfo_all_blocks=1 00:07:01.387 --rc geninfo_unexecuted_blocks=1 00:07:01.387 00:07:01.387 ' 00:07:01.387 06:47:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:01.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.387 --rc genhtml_branch_coverage=1 00:07:01.387 --rc genhtml_function_coverage=1 00:07:01.387 --rc genhtml_legend=1 00:07:01.387 --rc geninfo_all_blocks=1 00:07:01.387 --rc geninfo_unexecuted_blocks=1 00:07:01.387 00:07:01.387 ' 00:07:01.387 06:47:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:01.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.387 --rc genhtml_branch_coverage=1 00:07:01.387 --rc genhtml_function_coverage=1 00:07:01.387 --rc genhtml_legend=1 00:07:01.387 --rc geninfo_all_blocks=1 00:07:01.387 --rc geninfo_unexecuted_blocks=1 00:07:01.387 00:07:01.387 ' 
00:07:01.387 06:47:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:01.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.387 --rc genhtml_branch_coverage=1 00:07:01.387 --rc genhtml_function_coverage=1 00:07:01.387 --rc genhtml_legend=1 00:07:01.387 --rc geninfo_all_blocks=1 00:07:01.387 --rc geninfo_unexecuted_blocks=1 00:07:01.387 00:07:01.387 ' 00:07:01.387 06:47:22 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.387 06:47:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:01.387 06:47:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.387 06:47:22 -- common/autotest_common.sh@10 -- # set +x 00:07:01.387 ************************************ 00:07:01.387 START TEST thread_poller_perf 00:07:01.387 ************************************ 00:07:01.387 06:47:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.387 [2024-12-15 06:47:22.891851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:01.387 [2024-12-15 06:47:22.891934] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193460 ] 00:07:01.387 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.387 [2024-12-15 06:47:22.963932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.387 [2024-12-15 06:47:23.000065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.387 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:02.767 [2024-12-15T05:47:24.408Z] ====================================== 00:07:02.767 [2024-12-15T05:47:24.408Z] busy:2509022424 (cyc) 00:07:02.767 [2024-12-15T05:47:24.408Z] total_run_count: 413000 00:07:02.767 [2024-12-15T05:47:24.408Z] tsc_hz: 2500000000 (cyc) 00:07:02.767 [2024-12-15T05:47:24.408Z] ====================================== 00:07:02.767 [2024-12-15T05:47:24.408Z] poller_cost: 6075 (cyc), 2430 (nsec) 00:07:02.767 00:07:02.767 real 0m1.191s 00:07:02.767 user 0m1.096s 00:07:02.767 sys 0m0.091s 00:07:02.767 06:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.767 06:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:02.767 ************************************ 00:07:02.767 END TEST thread_poller_perf 00:07:02.767 ************************************ 00:07:02.767 06:47:24 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.767 06:47:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:02.767 06:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.767 06:47:24 -- common/autotest_common.sh@10 -- # set +x 00:07:02.767 ************************************ 00:07:02.767 START TEST thread_poller_perf 00:07:02.767 ************************************ 00:07:02.767 06:47:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.767 [2024-12-15 06:47:24.111227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:02.767 [2024-12-15 06:47:24.111294] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193726 ] 00:07:02.767 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.767 [2024-12-15 06:47:24.180027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.767 [2024-12-15 06:47:24.214235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.767 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:03.705 [2024-12-15T05:47:25.346Z] ====================================== 00:07:03.705 [2024-12-15T05:47:25.346Z] busy:2502278174 (cyc) 00:07:03.705 [2024-12-15T05:47:25.346Z] total_run_count: 5614000 00:07:03.705 [2024-12-15T05:47:25.346Z] tsc_hz: 2500000000 (cyc) 00:07:03.705 [2024-12-15T05:47:25.346Z] ====================================== 00:07:03.705 [2024-12-15T05:47:25.346Z] poller_cost: 445 (cyc), 178 (nsec) 00:07:03.705 00:07:03.705 real 0m1.171s 00:07:03.705 user 0m1.092s 00:07:03.705 sys 0m0.074s 00:07:03.705 06:47:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.705 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 ************************************ 00:07:03.705 END TEST thread_poller_perf 00:07:03.705 ************************************ 00:07:03.705 06:47:25 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.705 00:07:03.705 real 0m2.627s 00:07:03.705 user 0m2.320s 00:07:03.705 sys 0m0.326s 00:07:03.705 06:47:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.705 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.705 ************************************ 00:07:03.705 END TEST thread 00:07:03.705 ************************************ 00:07:03.965 06:47:25 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:03.965 06:47:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.965 06:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.965 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.965 ************************************ 00:07:03.965 START TEST accel 00:07:03.965 ************************************ 00:07:03.965 06:47:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:03.965 * Looking for test storage... 
00:07:03.965 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:03.965 06:47:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:03.965 06:47:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:03.965 06:47:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:03.965 06:47:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:03.965 06:47:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:03.965 06:47:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:03.965 06:47:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:03.965 06:47:25 -- scripts/common.sh@335 -- # IFS=.-: 00:07:03.965 06:47:25 -- scripts/common.sh@335 -- # read -ra ver1 00:07:03.965 06:47:25 -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.965 06:47:25 -- scripts/common.sh@336 -- # read -ra ver2 00:07:03.965 06:47:25 -- scripts/common.sh@337 -- # local 'op=<' 00:07:03.965 06:47:25 -- scripts/common.sh@339 -- # ver1_l=2 00:07:03.965 06:47:25 -- scripts/common.sh@340 -- # ver2_l=1 00:07:03.965 06:47:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:03.965 06:47:25 -- scripts/common.sh@343 -- # case "$op" in 00:07:03.965 06:47:25 -- scripts/common.sh@344 -- # : 1 00:07:03.965 06:47:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:03.965 06:47:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.965 06:47:25 -- scripts/common.sh@364 -- # decimal 1 00:07:03.965 06:47:25 -- scripts/common.sh@352 -- # local d=1 00:07:03.965 06:47:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.965 06:47:25 -- scripts/common.sh@354 -- # echo 1 00:07:03.965 06:47:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:03.965 06:47:25 -- scripts/common.sh@365 -- # decimal 2 00:07:03.965 06:47:25 -- scripts/common.sh@352 -- # local d=2 00:07:03.965 06:47:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.965 06:47:25 -- scripts/common.sh@354 -- # echo 2 00:07:03.965 06:47:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:03.965 06:47:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:03.965 06:47:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:03.965 06:47:25 -- scripts/common.sh@367 -- # return 0 00:07:03.965 06:47:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.965 06:47:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:03.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.965 --rc genhtml_branch_coverage=1 00:07:03.965 --rc genhtml_function_coverage=1 00:07:03.965 --rc genhtml_legend=1 00:07:03.965 --rc geninfo_all_blocks=1 00:07:03.965 --rc geninfo_unexecuted_blocks=1 00:07:03.965 00:07:03.965 ' 00:07:03.965 06:47:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:03.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.965 --rc genhtml_branch_coverage=1 00:07:03.965 --rc genhtml_function_coverage=1 00:07:03.965 --rc genhtml_legend=1 00:07:03.965 --rc geninfo_all_blocks=1 00:07:03.965 --rc geninfo_unexecuted_blocks=1 00:07:03.965 00:07:03.965 ' 00:07:03.965 06:47:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:03.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.965 --rc genhtml_branch_coverage=1 00:07:03.965 --rc genhtml_function_coverage=1 00:07:03.965 --rc genhtml_legend=1 00:07:03.965 --rc geninfo_all_blocks=1 00:07:03.965 --rc geninfo_unexecuted_blocks=1 00:07:03.965 00:07:03.965 ' 
00:07:03.965 06:47:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:03.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.965 --rc genhtml_branch_coverage=1 00:07:03.965 --rc genhtml_function_coverage=1 00:07:03.965 --rc genhtml_legend=1 00:07:03.965 --rc geninfo_all_blocks=1 00:07:03.965 --rc geninfo_unexecuted_blocks=1 00:07:03.965 00:07:03.965 ' 00:07:03.965 06:47:25 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:03.965 06:47:25 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:03.965 06:47:25 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.965 06:47:25 -- accel/accel.sh@59 -- # spdk_tgt_pid=1194061 00:07:03.965 06:47:25 -- accel/accel.sh@60 -- # waitforlisten 1194061 00:07:03.965 06:47:25 -- common/autotest_common.sh@829 -- # '[' -z 1194061 ']' 00:07:03.965 06:47:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.965 06:47:25 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:03.965 06:47:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.965 06:47:25 -- accel/accel.sh@58 -- # build_accel_config 00:07:03.965 06:47:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.965 06:47:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.965 06:47:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.965 06:47:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.965 06:47:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.965 06:47:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.965 06:47:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.965 06:47:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.965 06:47:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.965 06:47:25 -- accel/accel.sh@42 -- # jq -r . 00:07:03.965 [2024-12-15 06:47:25.592290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.965 [2024-12-15 06:47:25.592342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194061 ] 00:07:04.225 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.225 [2024-12-15 06:47:25.657340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.225 [2024-12-15 06:47:25.694025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.225 [2024-12-15 06:47:25.694148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.793 06:47:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.793 06:47:26 -- common/autotest_common.sh@862 -- # return 0 00:07:04.793 06:47:26 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:04.793 06:47:26 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:04.793 06:47:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.793 06:47:26 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:04.793 06:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:04.793 06:47:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # IFS== 00:07:05.053 06:47:26 -- accel/accel.sh@64 -- # read -r opc module 00:07:05.053 06:47:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:05.053 06:47:26 -- accel/accel.sh@67 -- # killprocess 1194061 00:07:05.053 06:47:26 -- common/autotest_common.sh@936 -- # '[' -z 1194061 ']' 00:07:05.053 06:47:26 -- common/autotest_common.sh@940 -- # kill -0 1194061 00:07:05.053 06:47:26 -- common/autotest_common.sh@941 -- # uname 00:07:05.053 06:47:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.053 06:47:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1194061 00:07:05.053 06:47:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:05.053 06:47:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:05.053 06:47:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1194061' 00:07:05.053 killing process with pid 1194061 00:07:05.053 06:47:26 -- common/autotest_common.sh@955 -- # kill 1194061 00:07:05.053 06:47:26 -- common/autotest_common.sh@960 -- # wait 1194061 00:07:05.313 06:47:26 -- accel/accel.sh@68 -- # trap - ERR 00:07:05.313 06:47:26 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:05.313 06:47:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.313 06:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.313 06:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 06:47:26 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:05.313 06:47:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:05.313 06:47:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.313 06:47:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.313 06:47:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.313 06:47:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.313 06:47:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.313 06:47:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.313 06:47:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.313 06:47:26 -- accel/accel.sh@42 -- # jq -r . 
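The loop traced above is accel.sh's get_expected_opcs filling the expected_opcs map: no accel JSON config was loaded (accel_json_cfg is empty), so every opcode the target reports is expected to resolve to the software module. A minimal sketch of running the same query by hand, assuming a spdk_tgt is already listening on the default /var/tmp/spdk.sock and that scripts/rpc.py from this SPDK tree is used:

    # list opcode=module pairs with the same jq filter accel.sh uses above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # each output line should read like "copy=software"

The killprocess call that follows is the usual autotest_common.sh teardown: probe the pid, refuse to kill a sudo wrapper, terminate, then reap. A sketch of that pattern, assuming $pid was spawned by the current shell:

    kill -0 "$pid"                                    # does the process still exist?
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ]  # never kill a sudo process
    kill "$pid" && wait "$pid"                        # SIGTERM, then reap the child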
00:07:05.313 06:47:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.313 06:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 06:47:26 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:05.313 06:47:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:05.313 06:47:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.313 06:47:26 -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 ************************************ 00:07:05.313 START TEST accel_missing_filename 00:07:05.313 ************************************ 00:07:05.313 06:47:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:05.313 06:47:26 -- common/autotest_common.sh@650 -- # local es=0 00:07:05.313 06:47:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:05.313 06:47:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:05.313 06:47:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.313 06:47:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:05.313 06:47:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.313 06:47:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:05.313 06:47:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:05.313 06:47:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.313 06:47:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.313 06:47:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.313 06:47:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.313 06:47:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.313 06:47:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.313 06:47:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.313 06:47:26 -- accel/accel.sh@42 -- # jq -r . 00:07:05.313 [2024-12-15 06:47:26.923762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:05.313 [2024-12-15 06:47:26.923847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194360 ] 00:07:05.573 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.573 [2024-12-15 06:47:26.996920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.573 [2024-12-15 06:47:27.032620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.573 [2024-12-15 06:47:27.073303] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.573 [2024-12-15 06:47:27.133010] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:05.573 A filename is required. 
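The "A filename is required." abort is the expected outcome here: for -w compress, accel_perf has nothing to compress unless -l names an uncompressed input file, so spdk_app_start fails and the NOT wrapper inverts the bad exit status into a pass. A sketch of the valid form, using the same bib test file that the accel_compress_verify case below passes:

    # compress needs -l <uncompressed input file>; per the usage text, -o 0 would
    # size each transfer to the whole input file
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib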
00:07:05.573 06:47:27 -- common/autotest_common.sh@653 -- # es=234 00:07:05.573 06:47:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.573 06:47:27 -- common/autotest_common.sh@662 -- # es=106 00:07:05.573 06:47:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.573 06:47:27 -- common/autotest_common.sh@670 -- # es=1 00:07:05.573 06:47:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.573 00:07:05.573 real 0m0.302s 00:07:05.573 user 0m0.194s 00:07:05.573 sys 0m0.146s 00:07:05.573 06:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.573 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:05.573 ************************************ 00:07:05.573 END TEST accel_missing_filename 00:07:05.573 ************************************ 00:07:05.832 06:47:27 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:05.832 06:47:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:05.832 06:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.832 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:05.832 ************************************ 00:07:05.832 START TEST accel_compress_verify 00:07:05.832 ************************************ 00:07:05.832 06:47:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:05.832 06:47:27 -- common/autotest_common.sh@650 -- # local es=0 00:07:05.832 06:47:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:05.832 06:47:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:05.832 06:47:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.832 06:47:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:05.832 06:47:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.832 06:47:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:05.832 06:47:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:05.832 06:47:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.832 06:47:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.832 06:47:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.832 06:47:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.832 06:47:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.832 06:47:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.832 06:47:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.832 06:47:27 -- accel/accel.sh@42 -- # jq -r . 00:07:05.832 [2024-12-15 06:47:27.271553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:05.833 [2024-12-15 06:47:27.271614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194384 ] 00:07:05.833 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.833 [2024-12-15 06:47:27.342376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.833 [2024-12-15 06:47:27.377677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.833 [2024-12-15 06:47:27.419168] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.092 [2024-12-15 06:47:27.478300] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:06.092 00:07:06.092 Compression does not support the verify option, aborting. 00:07:06.092 06:47:27 -- common/autotest_common.sh@653 -- # es=161 00:07:06.092 06:47:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.092 06:47:27 -- common/autotest_common.sh@662 -- # es=33 00:07:06.092 06:47:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.092 06:47:27 -- common/autotest_common.sh@670 -- # es=1 00:07:06.092 06:47:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.092 00:07:06.092 real 0m0.298s 00:07:06.092 user 0m0.206s 00:07:06.092 sys 0m0.131s 00:07:06.092 06:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.092 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 ************************************ 00:07:06.092 END TEST accel_compress_verify 00:07:06.092 ************************************ 00:07:06.092 06:47:27 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:06.092 06:47:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:06.092 06:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.092 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 ************************************ 00:07:06.092 START TEST accel_wrong_workload 00:07:06.092 ************************************ 00:07:06.092 06:47:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:06.092 06:47:27 -- common/autotest_common.sh@650 -- # local es=0 00:07:06.092 06:47:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:06.092 06:47:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:06.092 06:47:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.092 06:47:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:06.092 06:47:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.092 06:47:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:06.092 06:47:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:06.092 06:47:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.092 06:47:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.092 06:47:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.092 06:47:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.092 06:47:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.092 06:47:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.092 06:47:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.092 06:47:27 -- accel/accel.sh@42 -- # jq -r . 
00:07:06.092 Unsupported workload type: foobar 00:07:06.092 [2024-12-15 06:47:27.613990] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:06.092 accel_perf options: 00:07:06.092 [-h help message] 00:07:06.092 [-q queue depth per core] 00:07:06.092 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:06.092 [-T number of threads per core 00:07:06.092 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:06.092 [-t time in seconds] 00:07:06.092 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:06.092 [ dif_verify, , dif_generate, dif_generate_copy 00:07:06.092 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:06.092 [-l for compress/decompress workloads, name of uncompressed input file 00:07:06.092 [-S for crc32c workload, use this seed value (default 0) 00:07:06.092 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:06.092 [-f for fill workload, use this BYTE value (default 255) 00:07:06.092 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:06.092 [-y verify result if this switch is on] 00:07:06.092 [-a tasks to allocate per core (default: same value as -q)] 00:07:06.092 Can be used to spread operations across a wider range of memory. 00:07:06.092 06:47:27 -- common/autotest_common.sh@653 -- # es=1 00:07:06.092 06:47:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.092 06:47:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.092 06:47:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.092 00:07:06.092 real 0m0.035s 00:07:06.092 user 0m0.014s 00:07:06.092 sys 0m0.021s 00:07:06.092 06:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.092 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 ************************************ 00:07:06.092 END TEST accel_wrong_workload 00:07:06.092 ************************************ 00:07:06.092 Error: writing output failed: Broken pipe 00:07:06.092 06:47:27 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:06.092 06:47:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:06.092 06:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.092 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.092 ************************************ 00:07:06.092 START TEST accel_negative_buffers 00:07:06.092 ************************************ 00:07:06.092 06:47:27 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:06.092 06:47:27 -- common/autotest_common.sh@650 -- # local es=0 00:07:06.092 06:47:27 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:06.092 06:47:27 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:06.092 06:47:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.092 06:47:27 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:06.092 06:47:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.092 06:47:27 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:06.092 06:47:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:07:06.092 06:47:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.092 06:47:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.092 06:47:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.092 06:47:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.093 06:47:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.093 06:47:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.093 06:47:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.093 06:47:27 -- accel/accel.sh@42 -- # jq -r . 00:07:06.093 -x option must be non-negative. 00:07:06.093 [2024-12-15 06:47:27.678919] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:06.093 accel_perf options: 00:07:06.093 [-h help message] 00:07:06.093 [-q queue depth per core] 00:07:06.093 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:06.093 [-T number of threads per core 00:07:06.093 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:06.093 [-t time in seconds] 00:07:06.093 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:06.093 [ dif_verify, , dif_generate, dif_generate_copy 00:07:06.093 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:06.093 [-l for compress/decompress workloads, name of uncompressed input file 00:07:06.093 [-S for crc32c workload, use this seed value (default 0) 00:07:06.093 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:06.093 [-f for fill workload, use this BYTE value (default 255) 00:07:06.093 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:06.093 [-y verify result if this switch is on] 00:07:06.093 [-a tasks to allocate per core (default: same value as -q)] 00:07:06.093 Can be used to spread operations across a wider range of memory. 
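Like accel_wrong_workload before it, this case fails inside option parsing, which is why the full usage text is printed before the script evaluates es: -x -1 is rejected up front (the error message says non-negative, though the usage line gives 2 as the minimum source-buffer count for xor). A sketch of a valid xor run, under the same assumption about the binary path:

    # xor with the documented minimum of two source buffers, result verification on
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2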
00:07:06.093 06:47:27 -- common/autotest_common.sh@653 -- # es=1 00:07:06.093 06:47:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.093 06:47:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.093 06:47:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.093 00:07:06.093 real 0m0.023s 00:07:06.093 user 0m0.010s 00:07:06.093 sys 0m0.013s 00:07:06.093 06:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.093 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.093 ************************************ 00:07:06.093 END TEST accel_negative_buffers 00:07:06.093 ************************************ 00:07:06.093 Error: writing output failed: Broken pipe 00:07:06.093 06:47:27 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:06.093 06:47:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:06.093 06:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.093 06:47:27 -- common/autotest_common.sh@10 -- # set +x 00:07:06.093 ************************************ 00:07:06.093 START TEST accel_crc32c 00:07:06.093 ************************************ 00:07:06.093 06:47:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:06.093 06:47:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.093 06:47:27 -- accel/accel.sh@17 -- # local accel_module 00:07:06.093 06:47:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:06.093 06:47:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:06.093 06:47:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.352 06:47:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.352 06:47:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.352 06:47:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.352 06:47:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.352 06:47:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.352 06:47:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.352 06:47:27 -- accel/accel.sh@42 -- # jq -r . 00:07:06.352 [2024-12-15 06:47:27.753602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:06.352 [2024-12-15 06:47:27.753657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194553 ] 00:07:06.352 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.352 [2024-12-15 06:47:27.823104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.352 [2024-12-15 06:47:27.858706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.732 06:47:29 -- accel/accel.sh@18 -- # out=' 00:07:07.732 SPDK Configuration: 00:07:07.732 Core mask: 0x1 00:07:07.732 00:07:07.732 Accel Perf Configuration: 00:07:07.732 Workload Type: crc32c 00:07:07.732 CRC-32C seed: 32 00:07:07.732 Transfer size: 4096 bytes 00:07:07.732 Vector count 1 00:07:07.732 Module: software 00:07:07.732 Queue depth: 32 00:07:07.732 Allocate depth: 32 00:07:07.732 # threads/core: 1 00:07:07.732 Run time: 1 seconds 00:07:07.732 Verify: Yes 00:07:07.732 00:07:07.732 Running for 1 seconds... 
00:07:07.732 00:07:07.732 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.732 ------------------------------------------------------------------------------------ 00:07:07.732 0,0 608800/s 2378 MiB/s 0 0 00:07:07.732 ==================================================================================== 00:07:07.732 Total 608800/s 2378 MiB/s 0 0' 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:07.732 06:47:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:07.732 06:47:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.732 06:47:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.732 06:47:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.732 06:47:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.732 06:47:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.732 06:47:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.732 06:47:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.732 06:47:29 -- accel/accel.sh@42 -- # jq -r . 00:07:07.732 [2024-12-15 06:47:29.052185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.732 [2024-12-15 06:47:29.052274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194716 ] 00:07:07.732 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.732 [2024-12-15 06:47:29.125148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.732 [2024-12-15 06:47:29.158823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val= 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val= 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val=0x1 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val= 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val= 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val=crc32c 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val=32 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 
-- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val= 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val=software 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val=32 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.732 06:47:29 -- accel/accel.sh@21 -- # val=32 00:07:07.732 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.732 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.733 06:47:29 -- accel/accel.sh@21 -- # val=1 00:07:07.733 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.733 06:47:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.733 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.733 06:47:29 -- accel/accel.sh@21 -- # val=Yes 00:07:07.733 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.733 06:47:29 -- accel/accel.sh@21 -- # val= 00:07:07.733 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:07.733 06:47:29 -- accel/accel.sh@21 -- # val= 00:07:07.733 06:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # IFS=: 00:07:07.733 06:47:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.173 06:47:30 -- accel/accel.sh@21 -- # val= 00:07:09.173 06:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # IFS=: 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # read -r var val 00:07:09.173 06:47:30 -- accel/accel.sh@21 -- # val= 00:07:09.173 06:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # IFS=: 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # read -r var val 00:07:09.173 06:47:30 -- accel/accel.sh@21 -- # val= 00:07:09.173 06:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # IFS=: 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # read -r var val 00:07:09.173 06:47:30 -- accel/accel.sh@21 -- # val= 00:07:09.173 06:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # IFS=: 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # read -r var val 00:07:09.173 06:47:30 -- accel/accel.sh@21 -- # val= 00:07:09.173 06:47:30 -- accel/accel.sh@22 -- # case "$var" in 
00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # IFS=: 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # read -r var val 00:07:09.173 06:47:30 -- accel/accel.sh@21 -- # val= 00:07:09.173 06:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # IFS=: 00:07:09.173 06:47:30 -- accel/accel.sh@20 -- # read -r var val 00:07:09.173 06:47:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.173 06:47:30 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:09.173 06:47:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.173 00:07:09.173 real 0m2.604s 00:07:09.173 user 0m2.355s 00:07:09.173 sys 0m0.258s 00:07:09.173 06:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.173 06:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:09.173 ************************************ 00:07:09.173 END TEST accel_crc32c 00:07:09.173 ************************************ 00:07:09.173 06:47:30 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:09.173 06:47:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:09.173 06:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.173 06:47:30 -- common/autotest_common.sh@10 -- # set +x 00:07:09.173 ************************************ 00:07:09.173 START TEST accel_crc32c_C2 00:07:09.173 ************************************ 00:07:09.173 06:47:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:09.173 06:47:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.173 06:47:30 -- accel/accel.sh@17 -- # local accel_module 00:07:09.173 06:47:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:09.173 06:47:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:09.173 06:47:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.173 06:47:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.173 06:47:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.173 06:47:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.173 06:47:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.173 06:47:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.173 06:47:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.173 06:47:30 -- accel/accel.sh@42 -- # jq -r . 00:07:09.173 [2024-12-15 06:47:30.401786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:09.173 [2024-12-15 06:47:30.401853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195001 ] 00:07:09.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.173 [2024-12-15 06:47:30.472810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.173 [2024-12-15 06:47:30.508508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.171 06:47:31 -- accel/accel.sh@18 -- # out=' 00:07:10.171 SPDK Configuration: 00:07:10.171 Core mask: 0x1 00:07:10.171 00:07:10.171 Accel Perf Configuration: 00:07:10.171 Workload Type: crc32c 00:07:10.171 CRC-32C seed: 0 00:07:10.171 Transfer size: 4096 bytes 00:07:10.171 Vector count 2 00:07:10.171 Module: software 00:07:10.171 Queue depth: 32 00:07:10.171 Allocate depth: 32 00:07:10.171 # threads/core: 1 00:07:10.171 Run time: 1 seconds 00:07:10.171 Verify: Yes 00:07:10.171 00:07:10.171 Running for 1 seconds... 00:07:10.171 00:07:10.171 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.171 ------------------------------------------------------------------------------------ 00:07:10.171 0,0 474528/s 3707 MiB/s 0 0 00:07:10.171 ==================================================================================== 00:07:10.171 Total 474528/s 3707 MiB/s 0 0' 00:07:10.171 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.171 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.171 06:47:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:10.171 06:47:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:10.171 06:47:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.171 06:47:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.171 06:47:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.171 06:47:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.171 06:47:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.171 06:47:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.171 06:47:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.171 06:47:31 -- accel/accel.sh@42 -- # jq -r . 00:07:10.171 [2024-12-15 06:47:31.699389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:10.171 [2024-12-15 06:47:31.699462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195277 ] 00:07:10.171 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.171 [2024-12-15 06:47:31.767564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.171 [2024-12-15 06:47:31.801768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val= 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val= 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val=0x1 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val= 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val= 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val=crc32c 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val=0 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val= 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val=software 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val=32 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val=32 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- 
accel/accel.sh@21 -- # val=1 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val=Yes 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val= 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:10.431 06:47:31 -- accel/accel.sh@21 -- # val= 00:07:10.431 06:47:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # IFS=: 00:07:10.431 06:47:31 -- accel/accel.sh@20 -- # read -r var val 00:07:11.369 06:47:32 -- accel/accel.sh@21 -- # val= 00:07:11.369 06:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.369 06:47:32 -- accel/accel.sh@21 -- # val= 00:07:11.369 06:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.369 06:47:32 -- accel/accel.sh@21 -- # val= 00:07:11.369 06:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.369 06:47:32 -- accel/accel.sh@21 -- # val= 00:07:11.369 06:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.369 06:47:32 -- accel/accel.sh@21 -- # val= 00:07:11.369 06:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.369 06:47:32 -- accel/accel.sh@21 -- # val= 00:07:11.369 06:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # IFS=: 00:07:11.369 06:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.369 06:47:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.369 06:47:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:11.369 06:47:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.369 00:07:11.369 real 0m2.599s 00:07:11.369 user 0m2.351s 00:07:11.369 sys 0m0.255s 00:07:11.369 06:47:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.369 06:47:32 -- common/autotest_common.sh@10 -- # set +x 00:07:11.369 ************************************ 00:07:11.369 END TEST accel_crc32c_C2 00:07:11.369 ************************************ 00:07:11.629 06:47:33 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:11.629 06:47:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:11.629 06:47:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.629 06:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:11.629 ************************************ 00:07:11.629 START TEST accel_copy 
00:07:11.629 ************************************ 00:07:11.629 06:47:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:11.629 06:47:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.629 06:47:33 -- accel/accel.sh@17 -- # local accel_module 00:07:11.629 06:47:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:11.629 06:47:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:11.629 06:47:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.629 06:47:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.629 06:47:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.629 06:47:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.629 06:47:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.629 06:47:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.629 06:47:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.629 06:47:33 -- accel/accel.sh@42 -- # jq -r . 00:07:11.629 [2024-12-15 06:47:33.033267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:11.629 [2024-12-15 06:47:33.033322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195565 ] 00:07:11.629 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.629 [2024-12-15 06:47:33.099239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.629 [2024-12-15 06:47:33.134300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.008 06:47:34 -- accel/accel.sh@18 -- # out=' 00:07:13.008 SPDK Configuration: 00:07:13.008 Core mask: 0x1 00:07:13.008 00:07:13.008 Accel Perf Configuration: 00:07:13.008 Workload Type: copy 00:07:13.008 Transfer size: 4096 bytes 00:07:13.008 Vector count 1 00:07:13.008 Module: software 00:07:13.008 Queue depth: 32 00:07:13.008 Allocate depth: 32 00:07:13.008 # threads/core: 1 00:07:13.008 Run time: 1 seconds 00:07:13.008 Verify: Yes 00:07:13.008 00:07:13.008 Running for 1 seconds... 00:07:13.008 00:07:13.008 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.008 ------------------------------------------------------------------------------------ 00:07:13.008 0,0 441184/s 1723 MiB/s 0 0 00:07:13.008 ==================================================================================== 00:07:13.008 Total 441184/s 1723 MiB/s 0 0' 00:07:13.008 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.008 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.008 06:47:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:13.008 06:47:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:13.008 06:47:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.008 06:47:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.008 06:47:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.008 06:47:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.008 06:47:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.008 06:47:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.008 06:47:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.008 06:47:34 -- accel/accel.sh@42 -- # jq -r . 00:07:13.008 [2024-12-15 06:47:34.325139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:13.009 [2024-12-15 06:47:34.325209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195833 ] 00:07:13.009 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.009 [2024-12-15 06:47:34.393650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.009 [2024-12-15 06:47:34.427615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val= 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val= 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val=0x1 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val= 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val= 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val=copy 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val= 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val=software 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val=32 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val=32 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val=1 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val=Yes 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val= 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:13.009 06:47:34 -- accel/accel.sh@21 -- # val= 00:07:13.009 06:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # IFS=: 00:07:13.009 06:47:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.388 06:47:35 -- accel/accel.sh@21 -- # val= 00:07:14.388 06:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.388 06:47:35 -- accel/accel.sh@21 -- # val= 00:07:14.388 06:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.388 06:47:35 -- accel/accel.sh@21 -- # val= 00:07:14.388 06:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.388 06:47:35 -- accel/accel.sh@21 -- # val= 00:07:14.388 06:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.388 06:47:35 -- accel/accel.sh@21 -- # val= 00:07:14.388 06:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.388 06:47:35 -- accel/accel.sh@21 -- # val= 00:07:14.388 06:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:14.388 06:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.388 06:47:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.388 06:47:35 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:14.388 06:47:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.388 00:07:14.388 real 0m2.581s 00:07:14.388 user 0m2.333s 00:07:14.388 sys 0m0.256s 00:07:14.388 06:47:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.388 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.388 ************************************ 00:07:14.388 END TEST accel_copy 00:07:14.388 ************************************ 00:07:14.388 06:47:35 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.388 06:47:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:14.388 06:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.388 06:47:35 -- common/autotest_common.sh@10 -- # set +x 00:07:14.388 ************************************ 00:07:14.388 START TEST accel_fill 00:07:14.388 ************************************ 00:07:14.388 06:47:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.388 06:47:35 -- accel/accel.sh@16 -- # local accel_opc 
00:07:14.388 06:47:35 -- accel/accel.sh@17 -- # local accel_module 00:07:14.388 06:47:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.388 06:47:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.388 06:47:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.388 06:47:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.388 06:47:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.388 06:47:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.388 06:47:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.388 06:47:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.388 06:47:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.388 06:47:35 -- accel/accel.sh@42 -- # jq -r . 00:07:14.388 [2024-12-15 06:47:35.663316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:14.388 [2024-12-15 06:47:35.663382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196052 ] 00:07:14.388 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.388 [2024-12-15 06:47:35.728150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.388 [2024-12-15 06:47:35.763435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.324 06:47:36 -- accel/accel.sh@18 -- # out=' 00:07:15.324 SPDK Configuration: 00:07:15.324 Core mask: 0x1 00:07:15.324 00:07:15.324 Accel Perf Configuration: 00:07:15.324 Workload Type: fill 00:07:15.324 Fill pattern: 0x80 00:07:15.324 Transfer size: 4096 bytes 00:07:15.324 Vector count 1 00:07:15.324 Module: software 00:07:15.324 Queue depth: 64 00:07:15.324 Allocate depth: 64 00:07:15.324 # threads/core: 1 00:07:15.324 Run time: 1 seconds 00:07:15.324 Verify: Yes 00:07:15.324 00:07:15.324 Running for 1 seconds... 00:07:15.324 00:07:15.324 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.324 ------------------------------------------------------------------------------------ 00:07:15.324 0,0 709312/s 2770 MiB/s 0 0 00:07:15.324 ==================================================================================== 00:07:15.324 Total 709312/s 2770 MiB/s 0 0' 00:07:15.324 06:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:15.324 06:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:15.324 06:47:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.324 06:47:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:15.324 06:47:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.324 06:47:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.324 06:47:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.324 06:47:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.324 06:47:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.324 06:47:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.324 06:47:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.324 06:47:36 -- accel/accel.sh@42 -- # jq -r . 00:07:15.324 [2024-12-15 06:47:36.957394] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:15.324 [2024-12-15 06:47:36.957484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196202 ] 00:07:15.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.584 [2024-12-15 06:47:37.027595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.584 [2024-12-15 06:47:37.062389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val= 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val= 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val=0x1 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val= 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val= 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val=fill 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val=0x80 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val= 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val=software 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val=64 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val=64 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- 
accel/accel.sh@21 -- # val=1 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val=Yes 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val= 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.584 06:47:37 -- accel/accel.sh@21 -- # val= 00:07:15.584 06:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.584 06:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:16.965 06:47:38 -- accel/accel.sh@21 -- # val= 00:07:16.965 06:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.965 06:47:38 -- accel/accel.sh@21 -- # val= 00:07:16.965 06:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.965 06:47:38 -- accel/accel.sh@21 -- # val= 00:07:16.965 06:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.965 06:47:38 -- accel/accel.sh@21 -- # val= 00:07:16.965 06:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.965 06:47:38 -- accel/accel.sh@21 -- # val= 00:07:16.965 06:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.965 06:47:38 -- accel/accel.sh@21 -- # val= 00:07:16.965 06:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.965 06:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.965 06:47:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.965 06:47:38 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:16.965 06:47:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.965 00:07:16.965 real 0m2.586s 00:07:16.965 user 0m2.338s 00:07:16.965 sys 0m0.258s 00:07:16.965 06:47:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.965 06:47:38 -- common/autotest_common.sh@10 -- # set +x 00:07:16.965 ************************************ 00:07:16.965 END TEST accel_fill 00:07:16.965 ************************************ 00:07:16.965 06:47:38 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:16.965 06:47:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:16.965 06:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.965 06:47:38 -- common/autotest_common.sh@10 -- # set +x 00:07:16.965 ************************************ 00:07:16.965 START TEST 
accel_copy_crc32c 00:07:16.965 ************************************ 00:07:16.965 06:47:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:16.965 06:47:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.965 06:47:38 -- accel/accel.sh@17 -- # local accel_module 00:07:16.965 06:47:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:16.965 06:47:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.965 06:47:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:16.965 06:47:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.965 06:47:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.965 06:47:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.965 06:47:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.965 06:47:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.965 06:47:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.965 06:47:38 -- accel/accel.sh@42 -- # jq -r . 00:07:16.965 [2024-12-15 06:47:38.307091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.965 [2024-12-15 06:47:38.307163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196424 ] 00:07:16.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.965 [2024-12-15 06:47:38.375260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.965 [2024-12-15 06:47:38.410516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.344 06:47:39 -- accel/accel.sh@18 -- # out=' 00:07:18.344 SPDK Configuration: 00:07:18.344 Core mask: 0x1 00:07:18.344 00:07:18.345 Accel Perf Configuration: 00:07:18.345 Workload Type: copy_crc32c 00:07:18.345 CRC-32C seed: 0 00:07:18.345 Vector size: 4096 bytes 00:07:18.345 Transfer size: 4096 bytes 00:07:18.345 Vector count 1 00:07:18.345 Module: software 00:07:18.345 Queue depth: 32 00:07:18.345 Allocate depth: 32 00:07:18.345 # threads/core: 1 00:07:18.345 Run time: 1 seconds 00:07:18.345 Verify: Yes 00:07:18.345 00:07:18.345 Running for 1 seconds... 00:07:18.345 00:07:18.345 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.345 ------------------------------------------------------------------------------------ 00:07:18.345 0,0 349440/s 1365 MiB/s 0 0 00:07:18.345 ==================================================================================== 00:07:18.345 Total 349440/s 1365 MiB/s 0 0' 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:18.345 06:47:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:18.345 06:47:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.345 06:47:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.345 06:47:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.345 06:47:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.345 06:47:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.345 06:47:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.345 06:47:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.345 06:47:39 -- accel/accel.sh@42 -- # jq -r . 
00:07:18.345 [2024-12-15 06:47:39.604998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.345 [2024-12-15 06:47:39.605092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196693 ] 00:07:18.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.345 [2024-12-15 06:47:39.675455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.345 [2024-12-15 06:47:39.709269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val= 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val= 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=0x1 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val= 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val= 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=0 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val= 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=software 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=32 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 
00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=32 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=1 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val=Yes 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val= 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:18.345 06:47:39 -- accel/accel.sh@21 -- # val= 00:07:18.345 06:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:18.345 06:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:19.281 06:47:40 -- accel/accel.sh@21 -- # val= 00:07:19.281 06:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.281 06:47:40 -- accel/accel.sh@21 -- # val= 00:07:19.281 06:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.281 06:47:40 -- accel/accel.sh@21 -- # val= 00:07:19.281 06:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.281 06:47:40 -- accel/accel.sh@21 -- # val= 00:07:19.281 06:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.281 06:47:40 -- accel/accel.sh@21 -- # val= 00:07:19.281 06:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.281 06:47:40 -- accel/accel.sh@21 -- # val= 00:07:19.281 06:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:19.281 06:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:19.281 06:47:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.281 06:47:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:19.281 06:47:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.281 00:07:19.281 real 0m2.599s 00:07:19.281 user 0m2.344s 00:07:19.281 sys 0m0.264s 00:07:19.281 06:47:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.281 06:47:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.281 ************************************ 00:07:19.281 END TEST accel_copy_crc32c 00:07:19.281 ************************************ 00:07:19.281 
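The copy_crc32c case that just completed is driven through accel.sh, which feeds SPDK's accel_perf example binary a JSON accel config on -c /dev/fd/62. As a minimal sketch of reproducing the same software-module run standalone — assuming the SPDK tree is built at the workspace path shown in the log, and omitting the piped-in JSON config, which is only needed to select a non-software module —

  # 1-second software copy_crc32c run with data verification (-y);
  # with no -q/-a given, the run reports Queue depth: 32 / Allocate depth: 32,
  # matching the configuration blocks printed above
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y

The fill invocation earlier in the log shows the same binary with -q 64 -a 64 overriding those depths, plus -f 128 to set the 0x80 fill pattern.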
06:47:40 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.281 06:47:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:19.281 06:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.281 06:47:40 -- common/autotest_common.sh@10 -- # set +x 00:07:19.541 ************************************ 00:07:19.541 START TEST accel_copy_crc32c_C2 00:07:19.541 ************************************ 00:07:19.541 06:47:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.541 06:47:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.541 06:47:40 -- accel/accel.sh@17 -- # local accel_module 00:07:19.541 06:47:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:19.541 06:47:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:19.541 06:47:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.541 06:47:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.541 06:47:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.541 06:47:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.541 06:47:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.541 06:47:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.541 06:47:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.541 06:47:40 -- accel/accel.sh@42 -- # jq -r . 00:07:19.541 [2024-12-15 06:47:40.953323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.541 [2024-12-15 06:47:40.953392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196976 ] 00:07:19.541 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.541 [2024-12-15 06:47:41.023471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.541 [2024-12-15 06:47:41.058664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.919 06:47:42 -- accel/accel.sh@18 -- # out=' 00:07:20.919 SPDK Configuration: 00:07:20.919 Core mask: 0x1 00:07:20.919 00:07:20.919 Accel Perf Configuration: 00:07:20.919 Workload Type: copy_crc32c 00:07:20.919 CRC-32C seed: 0 00:07:20.919 Vector size: 4096 bytes 00:07:20.919 Transfer size: 8192 bytes 00:07:20.919 Vector count 2 00:07:20.919 Module: software 00:07:20.919 Queue depth: 32 00:07:20.919 Allocate depth: 32 00:07:20.919 # threads/core: 1 00:07:20.919 Run time: 1 seconds 00:07:20.919 Verify: Yes 00:07:20.919 00:07:20.919 Running for 1 seconds... 
00:07:20.919 00:07:20.919 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.919 ------------------------------------------------------------------------------------ 00:07:20.919 0,0 246112/s 1922 MiB/s 0 0 00:07:20.919 ==================================================================================== 00:07:20.919 Total 246112/s 1922 MiB/s 0 0' 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:20.919 06:47:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:20.919 06:47:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.919 06:47:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.919 06:47:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.919 06:47:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.919 06:47:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.919 06:47:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.919 06:47:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.919 06:47:42 -- accel/accel.sh@42 -- # jq -r . 00:07:20.919 [2024-12-15 06:47:42.250135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... [2024-12-15 06:47:42.250214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197248 ] 00:07:20.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.919 [2024-12-15 06:47:42.319349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.919 [2024-12-15 06:47:42.353150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val= 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val= 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=0x1 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val= 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val= 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=0 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 
00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val= 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=software 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=32 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=32 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=1 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val=Yes 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val= 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.919 06:47:42 -- accel/accel.sh@21 -- # val= 00:07:20.919 06:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.919 06:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:22.297 06:47:43 -- accel/accel.sh@21 -- # val= 00:07:22.297 06:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.297 06:47:43 -- accel/accel.sh@21 -- # val= 00:07:22.297 06:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.297 06:47:43 -- accel/accel.sh@21 -- # val= 00:07:22.297 06:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.297 06:47:43 -- accel/accel.sh@21 -- # val= 00:07:22.297 06:47:43 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.297 06:47:43 -- accel/accel.sh@21 -- # val= 00:07:22.297 06:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.297 06:47:43 -- accel/accel.sh@21 -- # val= 00:07:22.297 06:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:22.297 06:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:22.297 06:47:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.297 06:47:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:22.297 06:47:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.297 00:07:22.297 real 0m2.597s 00:07:22.297 user 0m2.334s 00:07:22.297 sys 0m0.271s 00:07:22.297 06:47:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.297 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.297 ************************************ 00:07:22.297 END TEST accel_copy_crc32c_C2 00:07:22.297 ************************************ 00:07:22.297 06:47:43 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:22.297 06:47:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:22.297 06:47:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.297 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:07:22.297 ************************************ 00:07:22.297 START TEST accel_dualcast 00:07:22.297 ************************************ 00:07:22.297 06:47:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:22.297 06:47:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.297 06:47:43 -- accel/accel.sh@17 -- # local accel_module 00:07:22.297 06:47:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:22.297 06:47:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.297 06:47:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:22.297 06:47:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.297 06:47:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.297 06:47:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.297 06:47:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.297 06:47:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.297 06:47:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.297 06:47:43 -- accel/accel.sh@42 -- # jq -r . 00:07:22.297 [2024-12-15 06:47:43.600554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:22.297 [2024-12-15 06:47:43.600624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197531 ] 00:07:22.297 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.297 [2024-12-15 06:47:43.669100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.297 [2024-12-15 06:47:43.703753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.234 06:47:44 -- accel/accel.sh@18 -- # out=' 00:07:23.234 SPDK Configuration: 00:07:23.234 Core mask: 0x1 00:07:23.234 00:07:23.234 Accel Perf Configuration: 00:07:23.234 Workload Type: dualcast 00:07:23.234 Transfer size: 4096 bytes 00:07:23.234 Vector count 1 00:07:23.234 Module: software 00:07:23.234 Queue depth: 32 00:07:23.234 Allocate depth: 32 00:07:23.234 # threads/core: 1 00:07:23.234 Run time: 1 seconds 00:07:23.234 Verify: Yes 00:07:23.234 00:07:23.234 Running for 1 seconds... 00:07:23.234 00:07:23.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.234 ------------------------------------------------------------------------------------ 00:07:23.234 0,0 533024/s 2082 MiB/s 0 0 00:07:23.234 ==================================================================================== 00:07:23.234 Total 533024/s 2082 MiB/s 0 0' 00:07:23.234 06:47:44 -- accel/accel.sh@20 -- # IFS=: 00:07:23.234 06:47:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.234 06:47:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:23.234 06:47:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:23.234 06:47:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.234 06:47:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.234 06:47:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.234 06:47:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.493 06:47:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.493 06:47:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.493 06:47:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.493 06:47:44 -- accel/accel.sh@42 -- # jq -r . 00:07:23.493 [2024-12-15 06:47:44.894848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:23.493 [2024-12-15 06:47:44.894915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197738 ] 00:07:23.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.493 [2024-12-15 06:47:44.963267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.493 [2024-12-15 06:47:44.997716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val= 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val= 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val=0x1 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val= 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val= 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val=dualcast 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val= 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val=software 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val=32 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val=32 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val=1 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val=Yes 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val= 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 06:47:45 -- accel/accel.sh@21 -- # val= 00:07:23.493 06:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 06:47:45 -- accel/accel.sh@20 -- # read -r var val 00:07:24.871 06:47:46 -- accel/accel.sh@21 -- # val= 00:07:24.871 06:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:24.871 06:47:46 -- accel/accel.sh@21 -- # val= 00:07:24.871 06:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:24.871 06:47:46 -- accel/accel.sh@21 -- # val= 00:07:24.871 06:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:24.871 06:47:46 -- accel/accel.sh@21 -- # val= 00:07:24.871 06:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:24.871 06:47:46 -- accel/accel.sh@21 -- # val= 00:07:24.871 06:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:24.871 06:47:46 -- accel/accel.sh@21 -- # val= 00:07:24.871 06:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:24.871 06:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:24.871 06:47:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.871 06:47:46 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:24.871 06:47:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.871 00:07:24.871 real 0m2.596s 00:07:24.871 user 0m2.352s 00:07:24.871 sys 0m0.252s 00:07:24.871 06:47:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.871 06:47:46 -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 ************************************ 00:07:24.871 END TEST accel_dualcast 00:07:24.871 ************************************ 00:07:24.871 06:47:46 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:24.871 06:47:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:24.871 06:47:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.871 06:47:46 -- common/autotest_common.sh@10 -- # set +x 00:07:24.871 ************************************ 00:07:24.871 START TEST accel_compare 00:07:24.871 ************************************ 00:07:24.871 06:47:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:24.871 06:47:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.871 06:47:46 
-- accel/accel.sh@17 -- # local accel_module 00:07:24.871 06:47:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:24.871 06:47:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.871 06:47:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:24.871 06:47:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.871 06:47:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.872 06:47:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.872 06:47:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.872 06:47:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.872 06:47:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.872 06:47:46 -- accel/accel.sh@42 -- # jq -r . 00:07:24.872 [2024-12-15 06:47:46.246374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:24.872 [2024-12-15 06:47:46.246444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197931 ] 00:07:24.872 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.872 [2024-12-15 06:47:46.316538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.872 [2024-12-15 06:47:46.351247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.250 06:47:47 -- accel/accel.sh@18 -- # out=' 00:07:26.250 SPDK Configuration: 00:07:26.250 Core mask: 0x1 00:07:26.250 00:07:26.250 Accel Perf Configuration: 00:07:26.250 Workload Type: compare 00:07:26.250 Transfer size: 4096 bytes 00:07:26.250 Vector count 1 00:07:26.250 Module: software 00:07:26.250 Queue depth: 32 00:07:26.250 Allocate depth: 32 00:07:26.250 # threads/core: 1 00:07:26.250 Run time: 1 seconds 00:07:26.250 Verify: Yes 00:07:26.250 00:07:26.250 Running for 1 seconds... 00:07:26.250 00:07:26.250 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.250 ------------------------------------------------------------------------------------ 00:07:26.250 0,0 652960/s 2550 MiB/s 0 0 00:07:26.250 ==================================================================================== 00:07:26.250 Total 652960/s 2550 MiB/s 0 0' 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:26.250 06:47:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:26.250 06:47:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.250 06:47:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.250 06:47:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.250 06:47:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.250 06:47:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.250 06:47:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.250 06:47:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.250 06:47:47 -- accel/accel.sh@42 -- # jq -r . 00:07:26.250 [2024-12-15 06:47:47.541407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:26.250 [2024-12-15 06:47:47.541477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198109 ] 00:07:26.250 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.250 [2024-12-15 06:47:47.610864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.250 [2024-12-15 06:47:47.645512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val= 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val= 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val=0x1 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val= 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val= 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val=compare 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val= 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val=software 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val=32 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val=32 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val=1 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val=Yes 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val= 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.250 06:47:47 -- accel/accel.sh@21 -- # val= 00:07:26.250 06:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:26.250 06:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:27.187 06:47:48 -- accel/accel.sh@21 -- # val= 00:07:27.187 06:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # IFS=: 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # read -r var val 00:07:27.187 06:47:48 -- accel/accel.sh@21 -- # val= 00:07:27.187 06:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # IFS=: 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # read -r var val 00:07:27.187 06:47:48 -- accel/accel.sh@21 -- # val= 00:07:27.187 06:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # IFS=: 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # read -r var val 00:07:27.187 06:47:48 -- accel/accel.sh@21 -- # val= 00:07:27.187 06:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # IFS=: 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # read -r var val 00:07:27.187 06:47:48 -- accel/accel.sh@21 -- # val= 00:07:27.187 06:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # IFS=: 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # read -r var val 00:07:27.187 06:47:48 -- accel/accel.sh@21 -- # val= 00:07:27.187 06:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # IFS=: 00:07:27.187 06:47:48 -- accel/accel.sh@20 -- # read -r var val 00:07:27.187 06:47:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.187 06:47:48 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:27.187 06:47:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.187 00:07:27.187 real 0m2.598s 00:07:27.187 user 0m2.342s 00:07:27.187 sys 0m0.263s 00:07:27.187 06:47:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.187 06:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:27.187 ************************************ 00:07:27.187 END TEST accel_compare 00:07:27.187 ************************************ 00:07:27.446 06:47:48 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:27.446 06:47:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:27.446 06:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.446 06:47:48 -- common/autotest_common.sh@10 -- # set +x 00:07:27.446 ************************************ 00:07:27.446 START TEST accel_xor 00:07:27.446 ************************************ 00:07:27.446 06:47:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:27.446 06:47:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.446 06:47:48 -- accel/accel.sh@17 
-- # local accel_module 00:07:27.446 06:47:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:27.446 06:47:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:27.446 06:47:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.446 06:47:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.446 06:47:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.446 06:47:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.446 06:47:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.446 06:47:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.446 06:47:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.446 06:47:48 -- accel/accel.sh@42 -- # jq -r . 00:07:27.446 [2024-12-15 06:47:48.890248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.446 [2024-12-15 06:47:48.890330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198391 ] 00:07:27.446 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.446 [2024-12-15 06:47:48.962087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.446 [2024-12-15 06:47:48.997341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.824 06:47:50 -- accel/accel.sh@18 -- # out=' 00:07:28.824 SPDK Configuration: 00:07:28.824 Core mask: 0x1 00:07:28.824 00:07:28.824 Accel Perf Configuration: 00:07:28.824 Workload Type: xor 00:07:28.824 Source buffers: 2 00:07:28.824 Transfer size: 4096 bytes 00:07:28.824 Vector count 1 00:07:28.824 Module: software 00:07:28.824 Queue depth: 32 00:07:28.824 Allocate depth: 32 00:07:28.824 # threads/core: 1 00:07:28.824 Run time: 1 seconds 00:07:28.824 Verify: Yes 00:07:28.824 00:07:28.824 Running for 1 seconds... 00:07:28.824 00:07:28.824 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.824 ------------------------------------------------------------------------------------ 00:07:28.824 0,0 501408/s 1958 MiB/s 0 0 00:07:28.824 ==================================================================================== 00:07:28.824 Total 501408/s 1958 MiB/s 0 0' 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:28.824 06:47:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:28.824 06:47:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.824 06:47:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.824 06:47:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.824 06:47:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.824 06:47:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.824 06:47:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.824 06:47:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.824 06:47:50 -- accel/accel.sh@42 -- # jq -r . 00:07:28.824 [2024-12-15 06:47:50.190053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:28.824 [2024-12-15 06:47:50.190123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198659 ] 00:07:28.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.824 [2024-12-15 06:47:50.260143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.824 [2024-12-15 06:47:50.294641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val= 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val= 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val=0x1 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val= 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val= 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val=xor 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val=2 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val= 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val=software 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val=32 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val=32 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- 
accel/accel.sh@21 -- # val=1 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val=Yes 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val= 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.824 06:47:50 -- accel/accel.sh@21 -- # val= 00:07:28.824 06:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.824 06:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.201 06:47:51 -- accel/accel.sh@21 -- # val= 00:07:30.201 06:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.201 06:47:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.201 06:47:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.201 06:47:51 -- accel/accel.sh@21 -- # val= 00:07:30.202 06:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.202 06:47:51 -- accel/accel.sh@21 -- # val= 00:07:30.202 06:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.202 06:47:51 -- accel/accel.sh@21 -- # val= 00:07:30.202 06:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.202 06:47:51 -- accel/accel.sh@21 -- # val= 00:07:30.202 06:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.202 06:47:51 -- accel/accel.sh@21 -- # val= 00:07:30.202 06:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # IFS=: 00:07:30.202 06:47:51 -- accel/accel.sh@20 -- # read -r var val 00:07:30.202 06:47:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.202 06:47:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:30.202 06:47:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.202 00:07:30.202 real 0m2.603s 00:07:30.202 user 0m2.357s 00:07:30.202 sys 0m0.254s 00:07:30.202 06:47:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.202 06:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:30.202 ************************************ 00:07:30.202 END TEST accel_xor 00:07:30.202 ************************************ 00:07:30.202 06:47:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:30.202 06:47:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:30.202 06:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.202 06:47:51 -- common/autotest_common.sh@10 -- # set +x 00:07:30.202 ************************************ 00:07:30.202 START TEST accel_xor 
00:07:30.202 ************************************ 00:07:30.202 06:47:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:30.202 06:47:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.202 06:47:51 -- accel/accel.sh@17 -- # local accel_module 00:07:30.202 06:47:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:30.202 06:47:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.202 06:47:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:30.202 06:47:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.202 06:47:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.202 06:47:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.202 06:47:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.202 06:47:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.202 06:47:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.202 06:47:51 -- accel/accel.sh@42 -- # jq -r . 00:07:30.202 [2024-12-15 06:47:51.542710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:30.202 [2024-12-15 06:47:51.542781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198946 ] 00:07:30.202 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.202 [2024-12-15 06:47:51.611239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.202 [2024-12-15 06:47:51.645734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.580 06:47:52 -- accel/accel.sh@18 -- # out=' 00:07:31.580 SPDK Configuration: 00:07:31.580 Core mask: 0x1 00:07:31.580 00:07:31.580 Accel Perf Configuration: 00:07:31.580 Workload Type: xor 00:07:31.580 Source buffers: 3 00:07:31.580 Transfer size: 4096 bytes 00:07:31.580 Vector count 1 00:07:31.580 Module: software 00:07:31.580 Queue depth: 32 00:07:31.580 Allocate depth: 32 00:07:31.580 # threads/core: 1 00:07:31.580 Run time: 1 seconds 00:07:31.580 Verify: Yes 00:07:31.580 00:07:31.580 Running for 1 seconds... 00:07:31.580 00:07:31.580 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.580 ------------------------------------------------------------------------------------ 00:07:31.580 0,0 468736/s 1831 MiB/s 0 0 00:07:31.580 ==================================================================================== 00:07:31.580 Total 468736/s 1831 MiB/s 0 0' 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.580 06:47:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:31.580 06:47:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:31.580 06:47:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.580 06:47:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.580 06:47:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.580 06:47:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.580 06:47:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.580 06:47:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.580 06:47:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.580 06:47:52 -- accel/accel.sh@42 -- # jq -r . 00:07:31.580 [2024-12-15 06:47:52.837476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:31.580 [2024-12-15 06:47:52.837546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199214 ] 00:07:31.580 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.580 [2024-12-15 06:47:52.905827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.580 [2024-12-15 06:47:52.939815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.580 06:47:52 -- accel/accel.sh@21 -- # val= 00:07:31.580 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.580 06:47:52 -- accel/accel.sh@21 -- # val= 00:07:31.580 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.580 06:47:52 -- accel/accel.sh@21 -- # val=0x1 00:07:31.580 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.580 06:47:52 -- accel/accel.sh@21 -- # val= 00:07:31.580 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.580 06:47:52 -- accel/accel.sh@21 -- # val= 00:07:31.580 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.580 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.580 06:47:52 -- accel/accel.sh@21 -- # val=xor 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val=3 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val= 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val=software 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val=32 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val=32 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- 
accel/accel.sh@21 -- # val=1 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val=Yes 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val= 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.581 06:47:52 -- accel/accel.sh@21 -- # val= 00:07:31.581 06:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:31.581 06:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.517 06:47:54 -- accel/accel.sh@21 -- # val= 00:07:32.517 06:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.517 06:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:32.517 06:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:32.517 06:47:54 -- accel/accel.sh@21 -- # val= 00:07:32.517 06:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.517 06:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:32.517 06:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:32.517 06:47:54 -- accel/accel.sh@21 -- # val= 00:07:32.517 06:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.517 06:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:32.517 06:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:32.517 06:47:54 -- accel/accel.sh@21 -- # val= 00:07:32.517 06:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.518 06:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:32.518 06:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:32.518 06:47:54 -- accel/accel.sh@21 -- # val= 00:07:32.518 06:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.518 06:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:32.518 06:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:32.518 06:47:54 -- accel/accel.sh@21 -- # val= 00:07:32.518 06:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.518 06:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:32.518 06:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:32.518 06:47:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.518 06:47:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:32.518 06:47:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.518 00:07:32.518 real 0m2.597s 00:07:32.518 user 0m2.346s 00:07:32.518 sys 0m0.259s 00:07:32.518 06:47:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.518 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:07:32.518 ************************************ 00:07:32.518 END TEST accel_xor 00:07:32.518 ************************************ 00:07:32.518 06:47:54 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:32.518 06:47:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:32.518 06:47:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.518 06:47:54 -- common/autotest_common.sh@10 -- # set +x 00:07:32.777 ************************************ 00:07:32.777 START TEST 
accel_dif_verify 00:07:32.777 ************************************ 00:07:32.777 06:47:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:32.777 06:47:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.777 06:47:54 -- accel/accel.sh@17 -- # local accel_module 00:07:32.777 06:47:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:32.777 06:47:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:32.777 06:47:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.777 06:47:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.777 06:47:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.777 06:47:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.777 06:47:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.777 06:47:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.777 06:47:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.777 06:47:54 -- accel/accel.sh@42 -- # jq -r . 00:07:32.777 [2024-12-15 06:47:54.188003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.777 [2024-12-15 06:47:54.188089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199447 ] 00:07:32.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.777 [2024-12-15 06:47:54.258848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.777 [2024-12-15 06:47:54.293506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.153 06:47:55 -- accel/accel.sh@18 -- # out=' 00:07:34.153 SPDK Configuration: 00:07:34.153 Core mask: 0x1 00:07:34.153 00:07:34.153 Accel Perf Configuration: 00:07:34.153 Workload Type: dif_verify 00:07:34.153 Vector size: 4096 bytes 00:07:34.153 Transfer size: 4096 bytes 00:07:34.153 Block size: 512 bytes 00:07:34.153 Metadata size: 8 bytes 00:07:34.153 Vector count 1 00:07:34.153 Module: software 00:07:34.153 Queue depth: 32 00:07:34.153 Allocate depth: 32 00:07:34.153 # threads/core: 1 00:07:34.153 Run time: 1 seconds 00:07:34.153 Verify: No 00:07:34.153 00:07:34.153 Running for 1 seconds... 00:07:34.153 00:07:34.153 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.153 ------------------------------------------------------------------------------------ 00:07:34.153 0,0 139552/s 553 MiB/s 0 0 00:07:34.153 ==================================================================================== 00:07:34.153 Total 139552/s 545 MiB/s 0 0' 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:34.153 06:47:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:34.153 06:47:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.153 06:47:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.153 06:47:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.153 06:47:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.153 06:47:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.153 06:47:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.153 06:47:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.153 06:47:55 -- accel/accel.sh@42 -- # jq -r . 
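The dif_verify table above rewards a quick arithmetic check. With a 4096-byte transfer, a 512-byte block size and 8 bytes of metadata, each transfer carries 4096/512 = 8 DIF tuples, or 64 bytes of protection information. The Total row matches payload-only accounting while the per-core row matches payload plus metadata, which is presumably why the two differ; a sanity check using only figures from the table (bash arithmetic, MiB = 2^20 bytes):

  ops=139552 xfer=4096 blk=512 md=8
  tuples=$(( xfer / blk ))                                    # 8 DIF tuples per transfer
  echo $(( ops * xfer / 1024 / 1024 )) MiB/s                  # 545 -> the Total row
  echo $(( ops * (xfer + tuples * md) / 1024 / 1024 )) MiB/s  # 553 -> the per-core row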
00:07:34.153 [2024-12-15 06:47:55.485448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:34.153 [2024-12-15 06:47:55.485516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199606 ] 00:07:34.153 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.153 [2024-12-15 06:47:55.554470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.153 [2024-12-15 06:47:55.588342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val= 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val= 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val=0x1 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val= 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val= 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val=dif_verify 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.153 06:47:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:34.153 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.153 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val= 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val=software 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val=32 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val=32 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val=1 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val=No 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val= 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:34.154 06:47:55 -- accel/accel.sh@21 -- # val= 00:07:34.154 06:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:34.154 06:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:35.531 06:47:56 -- accel/accel.sh@21 -- # val= 00:07:35.531 06:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.531 06:47:56 -- accel/accel.sh@21 -- # val= 00:07:35.531 06:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.531 06:47:56 -- accel/accel.sh@21 -- # val= 00:07:35.531 06:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.531 06:47:56 -- accel/accel.sh@21 -- # val= 00:07:35.531 06:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.531 06:47:56 -- accel/accel.sh@21 -- # val= 00:07:35.531 06:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.531 06:47:56 -- accel/accel.sh@21 -- # val= 00:07:35.531 06:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:35.531 06:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:35.531 06:47:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.531 06:47:56 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:35.531 06:47:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.531 00:07:35.531 real 0m2.600s 00:07:35.531 user 0m2.353s 00:07:35.531 sys 0m0.257s 00:07:35.531 06:47:56 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.531 06:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.531 ************************************ 00:07:35.531 END TEST accel_dif_verify 00:07:35.531 ************************************ 00:07:35.531 06:47:56 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:35.531 06:47:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:35.531 06:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.531 06:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.532 ************************************ 00:07:35.532 START TEST accel_dif_generate 00:07:35.532 ************************************ 00:07:35.532 06:47:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:35.532 06:47:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.532 06:47:56 -- accel/accel.sh@17 -- # local accel_module 00:07:35.532 06:47:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:35.532 06:47:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:35.532 06:47:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.532 06:47:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.532 06:47:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.532 06:47:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.532 06:47:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.532 06:47:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.532 06:47:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.532 06:47:56 -- accel/accel.sh@42 -- # jq -r . 00:07:35.532 [2024-12-15 06:47:56.818474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.532 [2024-12-15 06:47:56.818532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199808 ] 00:07:35.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.532 [2024-12-15 06:47:56.883587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.532 [2024-12-15 06:47:56.918886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.567 06:47:58 -- accel/accel.sh@18 -- # out=' 00:07:36.567 SPDK Configuration: 00:07:36.567 Core mask: 0x1 00:07:36.567 00:07:36.567 Accel Perf Configuration: 00:07:36.567 Workload Type: dif_generate 00:07:36.567 Vector size: 4096 bytes 00:07:36.567 Transfer size: 4096 bytes 00:07:36.567 Block size: 512 bytes 00:07:36.567 Metadata size: 8 bytes 00:07:36.567 Vector count 1 00:07:36.567 Module: software 00:07:36.567 Queue depth: 32 00:07:36.567 Allocate depth: 32 00:07:36.567 # threads/core: 1 00:07:36.567 Run time: 1 seconds 00:07:36.567 Verify: No 00:07:36.567 00:07:36.567 Running for 1 seconds... 
00:07:36.567 00:07:36.567 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.567 ------------------------------------------------------------------------------------ 00:07:36.567 0,0 166048/s 658 MiB/s 0 0 00:07:36.567 ==================================================================================== 00:07:36.567 Total 166048/s 648 MiB/s 0 0' 00:07:36.567 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.567 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.567 06:47:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:36.567 06:47:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:36.567 06:47:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.567 06:47:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.567 06:47:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.567 06:47:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.567 06:47:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.567 06:47:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.567 06:47:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.567 06:47:58 -- accel/accel.sh@42 -- # jq -r . 00:07:36.568 [2024-12-15 06:47:58.110446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:36.568 [2024-12-15 06:47:58.110516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200079 ] 00:07:36.568 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.568 [2024-12-15 06:47:58.179763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.827 [2024-12-15 06:47:58.214291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val= 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val= 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val=0x1 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val= 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val= 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val=dif_generate 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 
00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val= 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val=software 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val=32 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val=32 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val=1 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val=No 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val= 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.827 06:47:58 -- accel/accel.sh@21 -- # val= 00:07:36.827 06:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.827 06:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:37.764 06:47:59 -- accel/accel.sh@21 -- # val= 00:07:37.764 06:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.764 06:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.764 06:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.764 06:47:59 -- accel/accel.sh@21 -- # val= 00:07:37.764 06:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.764 06:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.764 06:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.764 06:47:59 -- accel/accel.sh@21 -- # val= 00:07:37.764 06:47:59 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:37.764 06:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.764 06:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.764 06:47:59 -- accel/accel.sh@21 -- # val= 00:07:37.764 06:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.764 06:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.765 06:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.765 06:47:59 -- accel/accel.sh@21 -- # val= 00:07:37.765 06:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.765 06:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.765 06:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.765 06:47:59 -- accel/accel.sh@21 -- # val= 00:07:37.765 06:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.765 06:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.765 06:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.765 06:47:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.765 06:47:59 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:37.765 06:47:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.765 00:07:37.765 real 0m2.581s 00:07:37.765 user 0m2.339s 00:07:37.765 sys 0m0.253s 00:07:37.765 06:47:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.765 06:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.765 ************************************ 00:07:37.765 END TEST accel_dif_generate 00:07:37.765 ************************************ 00:07:38.024 06:47:59 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:38.024 06:47:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:38.024 06:47:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.024 06:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.024 ************************************ 00:07:38.024 START TEST accel_dif_generate_copy 00:07:38.024 ************************************ 00:07:38.024 06:47:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:38.024 06:47:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.024 06:47:59 -- accel/accel.sh@17 -- # local accel_module 00:07:38.024 06:47:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:38.024 06:47:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:38.024 06:47:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.024 06:47:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.024 06:47:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.024 06:47:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.024 06:47:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.024 06:47:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.024 06:47:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.024 06:47:59 -- accel/accel.sh@42 -- # jq -r . 00:07:38.024 [2024-12-15 06:47:59.460292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
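The dif_generate test that just ended shows the same split as dif_verify: 166048 ops/s works out to 648 MiB/s at 4096 bytes and 658 MiB/s at 4160 bytes, consistent with the per-core row counting the 64 bytes of DIF per transfer. Structurally, every test in this section has the same shape: a START banner, two back-to-back 1-second accel_perf runs, a real/user/sys timing block of roughly 2.6 seconds, and an END banner, all emitted by the run_test helper referenced throughout as common/autotest_common.sh. A hypothetical, stripped-down sketch of that wrapper (the real one also saves and restores xtrace state, which is what the xtrace_disable calls above are doing):

  run_test() {    # sketch only; see test/common/autotest_common.sh in the SPDK tree
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"     # accel_test launches accel_perf twice at -t 1 each,
                  # hence every test here reports real of about 2.6s
    echo "************ END TEST $name ************"
  }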
00:07:38.024 [2024-12-15 06:47:59.460373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200360 ] 00:07:38.024 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.024 [2024-12-15 06:47:59.531039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.024 [2024-12-15 06:47:59.566099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.402 06:48:00 -- accel/accel.sh@18 -- # out=' 00:07:39.402 SPDK Configuration: 00:07:39.402 Core mask: 0x1 00:07:39.402 00:07:39.402 Accel Perf Configuration: 00:07:39.402 Workload Type: dif_generate_copy 00:07:39.402 Vector size: 4096 bytes 00:07:39.402 Transfer size: 4096 bytes 00:07:39.402 Vector count 1 00:07:39.402 Module: software 00:07:39.402 Queue depth: 32 00:07:39.402 Allocate depth: 32 00:07:39.402 # threads/core: 1 00:07:39.402 Run time: 1 seconds 00:07:39.402 Verify: No 00:07:39.402 00:07:39.402 Running for 1 seconds... 00:07:39.402 00:07:39.402 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.402 ------------------------------------------------------------------------------------ 00:07:39.402 0,0 122784/s 487 MiB/s 0 0 00:07:39.402 ==================================================================================== 00:07:39.402 Total 122784/s 479 MiB/s 0 0' 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:39.402 06:48:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:39.402 06:48:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.402 06:48:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.402 06:48:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.402 06:48:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.402 06:48:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.402 06:48:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.402 06:48:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.402 06:48:00 -- accel/accel.sh@42 -- # jq -r . 00:07:39.402 [2024-12-15 06:48:00.762618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
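The wall of val= lines that follows is the second half of each test: after capturing the banner once into out=, accel.sh re-runs accel_perf and walks the output line by line, splitting each "Key: value" pair on a colon and remembering the opcode and module so it can assert on them afterwards (the [[ -n software ]] and [[ software == software ]] checks that close each test). A hedged sketch of that loop; the case patterns are illustrative guesses, not copied from accel.sh:

  perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  while IFS=: read -r var val; do
    case "$var" in                              # produces the IFS=: / read -r var val
      *'Workload Type'*) accel_opc=${val# } ;;  # / case "$var" lines seen above
      *Module*)          accel_module=${val# } ;;
    esac
  done < <("$perf" -t 1 -w dif_generate_copy)
  [[ -n $accel_module ]]                        # -> the "[[ -n software ]]" checks
  [[ $accel_module == software ]]               # -> "[[ software == software ]]"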
00:07:39.402 [2024-12-15 06:48:00.762706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200685 ] 00:07:39.402 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.402 [2024-12-15 06:48:00.835952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.402 [2024-12-15 06:48:00.871177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val= 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val= 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val=0x1 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val= 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val= 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val= 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val=software 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val=32 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val=32 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r 
var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val=1 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val=No 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val= 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:39.402 06:48:00 -- accel/accel.sh@21 -- # val= 00:07:39.402 06:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:39.402 06:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:40.780 06:48:02 -- accel/accel.sh@21 -- # val= 00:07:40.780 06:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:40.780 06:48:02 -- accel/accel.sh@21 -- # val= 00:07:40.780 06:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:40.780 06:48:02 -- accel/accel.sh@21 -- # val= 00:07:40.780 06:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:40.780 06:48:02 -- accel/accel.sh@21 -- # val= 00:07:40.780 06:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:40.780 06:48:02 -- accel/accel.sh@21 -- # val= 00:07:40.780 06:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:40.780 06:48:02 -- accel/accel.sh@21 -- # val= 00:07:40.780 06:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:40.780 06:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:40.780 06:48:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.780 06:48:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:40.780 06:48:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.780 00:07:40.780 real 0m2.610s 00:07:40.780 user 0m2.358s 00:07:40.780 sys 0m0.259s 00:07:40.780 06:48:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.780 06:48:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.780 ************************************ 00:07:40.780 END TEST accel_dif_generate_copy 00:07:40.780 ************************************ 00:07:40.780 06:48:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:40.780 06:48:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:40.780 06:48:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:40.780 06:48:02 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.780 06:48:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.780 ************************************ 00:07:40.780 START TEST accel_comp 00:07:40.780 ************************************ 00:07:40.780 06:48:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:40.780 06:48:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.780 06:48:02 -- accel/accel.sh@17 -- # local accel_module 00:07:40.780 06:48:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:40.780 06:48:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.780 06:48:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:40.780 06:48:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.780 06:48:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.780 06:48:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.780 06:48:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.780 06:48:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.780 06:48:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.780 06:48:02 -- accel/accel.sh@42 -- # jq -r . 00:07:40.780 [2024-12-15 06:48:02.114361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.780 [2024-12-15 06:48:02.114430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201040 ] 00:07:40.780 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.780 [2024-12-15 06:48:02.182650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.780 [2024-12-15 06:48:02.217360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.159 06:48:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:42.159 00:07:42.159 SPDK Configuration: 00:07:42.159 Core mask: 0x1 00:07:42.159 00:07:42.159 Accel Perf Configuration: 00:07:42.159 Workload Type: compress 00:07:42.159 Transfer size: 4096 bytes 00:07:42.159 Vector count 1 00:07:42.159 Module: software 00:07:42.159 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:42.159 Queue depth: 32 00:07:42.159 Allocate depth: 32 00:07:42.159 # threads/core: 1 00:07:42.159 Run time: 1 seconds 00:07:42.159 Verify: No 00:07:42.159 00:07:42.159 Running for 1 seconds... 
00:07:42.159 00:07:42.159 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.159 ------------------------------------------------------------------------------------ 00:07:42.159 0,0 64992/s 270 MiB/s 0 0 00:07:42.159 ==================================================================================== 00:07:42.159 Total 64992/s 253 MiB/s 0 0' 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:42.159 06:48:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:42.159 06:48:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.159 06:48:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.159 06:48:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.159 06:48:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.159 06:48:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.159 06:48:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.159 06:48:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.159 06:48:03 -- accel/accel.sh@42 -- # jq -r . 00:07:42.159 [2024-12-15 06:48:03.413595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.159 [2024-12-15 06:48:03.413678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201248 ] 00:07:42.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.159 [2024-12-15 06:48:03.484132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.159 [2024-12-15 06:48:03.518079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=0x1 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=compress 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=software 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=32 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=32 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=1 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val=No 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.159 06:48:03 -- accel/accel.sh@21 -- # val= 00:07:42.159 06:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:42.159 06:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:43.096 06:48:04 -- accel/accel.sh@21 -- # val= 00:07:43.096 06:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:43.096 06:48:04 -- accel/accel.sh@21 -- # val= 00:07:43.096 06:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:43.096 06:48:04 -- accel/accel.sh@21 -- # val= 00:07:43.096 06:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:43.096 
06:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:43.096 06:48:04 -- accel/accel.sh@21 -- # val= 00:07:43.096 06:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:43.096 06:48:04 -- accel/accel.sh@21 -- # val= 00:07:43.096 06:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:43.096 06:48:04 -- accel/accel.sh@21 -- # val= 00:07:43.096 06:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:43.096 06:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:43.096 06:48:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.096 06:48:04 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:43.096 06:48:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.096 00:07:43.096 real 0m2.607s 00:07:43.096 user 0m2.356s 00:07:43.096 sys 0m0.257s 00:07:43.096 06:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.096 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:43.096 ************************************ 00:07:43.096 END TEST accel_comp 00:07:43.096 ************************************ 00:07:43.096 06:48:04 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.096 06:48:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:43.096 06:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.096 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:43.356 ************************************ 00:07:43.356 START TEST accel_decomp 00:07:43.356 ************************************ 00:07:43.356 06:48:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.356 06:48:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.356 06:48:04 -- accel/accel.sh@17 -- # local accel_module 00:07:43.356 06:48:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.356 06:48:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.356 06:48:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.356 06:48:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.356 06:48:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.356 06:48:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.356 06:48:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.356 06:48:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.356 06:48:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.356 06:48:04 -- accel/accel.sh@42 -- # jq -r . 00:07:43.356 [2024-12-15 06:48:04.768139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
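The compress run above is the first test here to take an input corpus: the -l flag points accel_perf at the bib file under test/accel, and the 'Preparing input file...' line is that corpus being chunked into 4096-byte transfers (note Verify: No for this workload). The decompress run now starting re-reads the same file. The compress table's Total row again checks out against payload-only accounting:

  # Figures from the compress table above; MiB = 2^20 bytes
  echo $(( 64992 * 4096 / 1024 / 1024 ))   # -> 253 MiB/s, the Total row

The per-core figure of 270 MiB/s evidently uses a different byte accounting that the log does not spell out, so it is left unchecked here.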
00:07:43.356 [2024-12-15 06:48:04.768216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201703 ] 00:07:43.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.356 [2024-12-15 06:48:04.839997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.356 [2024-12-15 06:48:04.877142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.732 06:48:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:44.732 00:07:44.732 SPDK Configuration: 00:07:44.732 Core mask: 0x1 00:07:44.732 00:07:44.732 Accel Perf Configuration: 00:07:44.732 Workload Type: decompress 00:07:44.732 Transfer size: 4096 bytes 00:07:44.732 Vector count 1 00:07:44.732 Module: software 00:07:44.732 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:44.732 Queue depth: 32 00:07:44.732 Allocate depth: 32 00:07:44.732 # threads/core: 1 00:07:44.732 Run time: 1 seconds 00:07:44.732 Verify: Yes 00:07:44.732 00:07:44.732 Running for 1 seconds... 00:07:44.732 00:07:44.732 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.732 ------------------------------------------------------------------------------------ 00:07:44.732 0,0 87168/s 160 MiB/s 0 0 00:07:44.732 ==================================================================================== 00:07:44.732 Total 87168/s 340 MiB/s 0 0' 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:44.732 06:48:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:44.732 06:48:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.732 06:48:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.732 06:48:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.732 06:48:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.732 06:48:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.732 06:48:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.732 06:48:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.732 06:48:06 -- accel/accel.sh@42 -- # jq -r . 00:07:44.732 [2024-12-15 06:48:06.069463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
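For decompress the verify flag is back on (Verify: Yes), so each inflated chunk is checked rather than merely produced. The two bandwidth rows diverge more sharply than elsewhere: the Total row again matches the uncompressed payload, and one plausible reading of the 160 MiB/s per-core figure, not something this log states, is that it counts compressed input bytes, which would imply an average compressed chunk of roughly 1.9 KiB:

  echo $(( 87168 * 4096 / 1024 / 1024 ))   # -> 340 MiB/s, the Total row
  # Hypothetical: if the per-core row counts compressed input bytes, the
  # implied average compressed chunk is about 1924 bytes (a ~47% ratio):
  echo $(( 160 * 1024 * 1024 / 87168 ))    # -> 1924

The accel_decmop_full test that follows (its name carries the harness script's own typo for 'decomp') repeats the workload with -o 0, apparently selecting full-sized transfers: its configuration block reports 111250 bytes instead of 4 KiB.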
00:07:44.732 [2024-12-15 06:48:06.069534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202040 ] 00:07:44.732 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.732 [2024-12-15 06:48:06.140774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.732 [2024-12-15 06:48:06.174696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=0x1 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=decompress 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=software 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=32 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- 
accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=32 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=1 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val=Yes 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.732 06:48:06 -- accel/accel.sh@21 -- # val= 00:07:44.732 06:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.732 06:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 06:48:07 -- accel/accel.sh@21 -- # val= 00:07:46.114 06:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 06:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 06:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 06:48:07 -- accel/accel.sh@21 -- # val= 00:07:46.114 06:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 06:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 06:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 06:48:07 -- accel/accel.sh@21 -- # val= 00:07:46.114 06:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 06:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 06:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 06:48:07 -- accel/accel.sh@21 -- # val= 00:07:46.114 06:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 06:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:46.115 06:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:46.115 06:48:07 -- accel/accel.sh@21 -- # val= 00:07:46.115 06:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.115 06:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:46.115 06:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:46.115 06:48:07 -- accel/accel.sh@21 -- # val= 00:07:46.115 06:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.115 06:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:46.115 06:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:46.115 06:48:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.115 06:48:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.115 06:48:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.115 00:07:46.115 real 0m2.606s 00:07:46.115 user 0m2.348s 00:07:46.115 sys 0m0.265s 00:07:46.115 06:48:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.115 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:46.115 ************************************ 00:07:46.115 END TEST accel_decomp 00:07:46.115 ************************************ 00:07:46.115 06:48:07 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.115 06:48:07 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:46.115 06:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.115 06:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:46.115 ************************************ 00:07:46.115 START TEST accel_decmop_full 00:07:46.115 ************************************ 00:07:46.115 06:48:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.115 06:48:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.115 06:48:07 -- accel/accel.sh@17 -- # local accel_module 00:07:46.115 06:48:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.115 06:48:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.115 06:48:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.115 06:48:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.115 06:48:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.115 06:48:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.115 06:48:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.115 06:48:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.115 06:48:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.115 06:48:07 -- accel/accel.sh@42 -- # jq -r . 00:07:46.115 [2024-12-15 06:48:07.416996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:46.115 [2024-12-15 06:48:07.417066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202323 ] 00:07:46.115 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.115 [2024-12-15 06:48:07.484263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.115 [2024-12-15 06:48:07.518730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.493 06:48:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:47.493 00:07:47.493 SPDK Configuration: 00:07:47.493 Core mask: 0x1 00:07:47.493 00:07:47.493 Accel Perf Configuration: 00:07:47.493 Workload Type: decompress 00:07:47.493 Transfer size: 111250 bytes 00:07:47.493 Vector count 1 00:07:47.493 Module: software 00:07:47.493 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:47.493 Queue depth: 32 00:07:47.493 Allocate depth: 32 00:07:47.493 # threads/core: 1 00:07:47.493 Run time: 1 seconds 00:07:47.493 Verify: Yes 00:07:47.493 00:07:47.493 Running for 1 seconds... 
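This is the "full" variant of the decompress test: the extra -o 0 flag appears to set the requested transfer size to zero, which makes accel_perf fall back to whole-file transfers, hence the 111250-byte transfer size reported above instead of the 4096 bytes used by the plain test. A hypothetical standalone invocation outside the harness (the in-tree ./spdk path is an assumption) would look like:

  # assumption: SPDK built in-tree under ./spdk; -o 0 selects whole-file (111250-byte) transfers
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -o 0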
00:07:47.493 00:07:47.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.493 ------------------------------------------------------------------------------------ 00:07:47.494 0,0 5632/s 232 MiB/s 0 0 00:07:47.494 ==================================================================================== 00:07:47.494 Total 5632/s 597 MiB/s 0 0' 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:47.494 06:48:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:47.494 06:48:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.494 06:48:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.494 06:48:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.494 06:48:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.494 06:48:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.494 06:48:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.494 06:48:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.494 06:48:08 -- accel/accel.sh@42 -- # jq -r . 00:07:47.494 [2024-12-15 06:48:08.721008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:47.494 [2024-12-15 06:48:08.721075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202589 ] 00:07:47.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.494 [2024-12-15 06:48:08.789524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.494 [2024-12-15 06:48:08.823612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=0x1 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=decompress 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 
00:07:47.494 06:48:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=software 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=32 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=32 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=1 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val=Yes 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:47.494 06:48:08 -- accel/accel.sh@21 -- # val= 00:07:47.494 06:48:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # IFS=: 00:07:47.494 06:48:08 -- accel/accel.sh@20 -- # read -r var val 00:07:48.430 06:48:09 -- accel/accel.sh@21 -- # val= 00:07:48.430 06:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.430 06:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:48.430 06:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:48.430 06:48:09 -- accel/accel.sh@21 -- # val= 00:07:48.430 06:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.430 06:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:48.430 06:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:48.430 06:48:09 -- accel/accel.sh@21 -- # val= 00:07:48.430 06:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.430 06:48:09 -- 
accel/accel.sh@20 -- # IFS=: 00:07:48.430 06:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:48.430 06:48:09 -- accel/accel.sh@21 -- # val= 00:07:48.430 06:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.430 06:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:48.430 06:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:48.431 06:48:09 -- accel/accel.sh@21 -- # val= 00:07:48.431 06:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.431 06:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.431 06:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.431 06:48:10 -- accel/accel.sh@21 -- # val= 00:07:48.431 06:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.431 06:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.431 06:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.431 06:48:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.431 06:48:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:48.431 06:48:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.431 00:07:48.431 real 0m2.612s 00:07:48.431 user 0m2.367s 00:07:48.431 sys 0m0.251s 00:07:48.431 06:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.431 06:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:48.431 ************************************ 00:07:48.431 END TEST accel_decmop_full 00:07:48.431 ************************************ 00:07:48.431 06:48:10 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.431 06:48:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:48.431 06:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.431 06:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:48.431 ************************************ 00:07:48.431 START TEST accel_decomp_mcore 00:07:48.431 ************************************ 00:07:48.431 06:48:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.431 06:48:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.431 06:48:10 -- accel/accel.sh@17 -- # local accel_module 00:07:48.431 06:48:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.431 06:48:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.431 06:48:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.431 06:48:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.431 06:48:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.431 06:48:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.431 06:48:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.431 06:48:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.431 06:48:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.431 06:48:10 -- accel/accel.sh@42 -- # jq -r . 00:07:48.689 [2024-12-15 06:48:10.077800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
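The -m 0xf argument in the run_test command above hands accel_perf a four-core mask, which is why the EAL below is started with -c 0xf and four reactors come up on cores 0 through 3. Each set bit in the mask selects one core; a small sketch of how to read such a mask (mask value taken from the command above):

  # list the cores selected by mask 0xf (bits 0-3 set)
  mask=0xf
  for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && echo "core $core selected"
  done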
00:07:48.689 [2024-12-15 06:48:10.077868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202878 ] 00:07:48.689 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.689 [2024-12-15 06:48:10.149620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.689 [2024-12-15 06:48:10.188865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.689 [2024-12-15 06:48:10.188961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.689 [2024-12-15 06:48:10.189023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.689 [2024-12-15 06:48:10.189025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.064 06:48:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:50.064 00:07:50.064 SPDK Configuration: 00:07:50.064 Core mask: 0xf 00:07:50.064 00:07:50.064 Accel Perf Configuration: 00:07:50.064 Workload Type: decompress 00:07:50.064 Transfer size: 4096 bytes 00:07:50.064 Vector count 1 00:07:50.064 Module: software 00:07:50.064 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:50.064 Queue depth: 32 00:07:50.064 Allocate depth: 32 00:07:50.064 # threads/core: 1 00:07:50.064 Run time: 1 seconds 00:07:50.064 Verify: Yes 00:07:50.064 00:07:50.064 Running for 1 seconds... 00:07:50.064 00:07:50.064 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.064 ------------------------------------------------------------------------------------ 00:07:50.064 0,0 72992/s 134 MiB/s 0 0 00:07:50.064 3,0 73664/s 135 MiB/s 0 0 00:07:50.064 2,0 73440/s 135 MiB/s 0 0 00:07:50.064 1,0 73472/s 135 MiB/s 0 0 00:07:50.064 ==================================================================================== 00:07:50.064 Total 293568/s 1146 MiB/s 0 0' 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.064 06:48:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.064 06:48:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.064 06:48:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.064 06:48:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.064 06:48:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.064 06:48:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.064 06:48:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.064 06:48:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.064 06:48:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.064 06:48:11 -- accel/accel.sh@42 -- # jq -r . 00:07:50.064 [2024-12-15 06:48:11.392186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
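With four reactors, the aggregate in the Total row above is simply the sum of the per-core rows: 72992 + 73664 + 73440 + 73472 = 293568 transfers/s, about 3.4x the 87168/s the single-core run achieved earlier, so each core runs slightly slower than a core does alone. The same awk-style check as before:

  # sum the per-core rates and compare against the single-core baseline
  awk 'BEGIN { total = 72992 + 73664 + 73440 + 73472;
               printf "total=%d/s scaling=%.2fx\n", total, total / 87168 }'
  # prints: total=293568/s scaling=3.37x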
00:07:50.064 [2024-12-15 06:48:11.392256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203147 ] 00:07:50.064 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.064 [2024-12-15 06:48:11.462299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.064 [2024-12-15 06:48:11.498941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.064 [2024-12-15 06:48:11.499041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.064 [2024-12-15 06:48:11.499062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.064 [2024-12-15 06:48:11.499064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.064 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.064 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.064 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.064 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.064 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.064 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.064 06:48:11 -- accel/accel.sh@21 -- # val=0xf 00:07:50.064 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.064 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val=decompress 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val=software 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val=32 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val=32 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val=1 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val=Yes 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:50.065 06:48:11 -- accel/accel.sh@21 -- # val= 00:07:50.065 06:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # IFS=: 00:07:50.065 06:48:11 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 
-- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@21 -- # val= 00:07:51.442 06:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:51.442 06:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.442 06:48:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.442 06:48:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:51.442 06:48:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.442 00:07:51.442 real 0m2.633s 00:07:51.442 user 0m9.015s 00:07:51.442 sys 0m0.282s 00:07:51.442 06:48:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.442 06:48:12 -- common/autotest_common.sh@10 -- # set +x 00:07:51.442 ************************************ 00:07:51.442 END TEST accel_decomp_mcore 00:07:51.442 ************************************ 00:07:51.442 06:48:12 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:51.442 06:48:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:51.442 06:48:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.442 06:48:12 -- common/autotest_common.sh@10 -- # set +x 00:07:51.442 ************************************ 00:07:51.442 START TEST accel_decomp_full_mcore 00:07:51.442 ************************************ 00:07:51.442 06:48:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:51.442 06:48:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.442 06:48:12 -- accel/accel.sh@17 -- # local accel_module 00:07:51.442 06:48:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:51.442 06:48:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:51.442 06:48:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.442 06:48:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.442 06:48:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.442 06:48:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.442 06:48:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.442 06:48:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.442 06:48:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.442 06:48:12 -- accel/accel.sh@42 -- # jq -r . 00:07:51.442 [2024-12-15 06:48:12.753652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
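Each test in this log is driven through the run_test helper from autotest_common.sh, which is what produces the START TEST/END TEST banners and the real/user/sys timing lines seen throughout. A minimal sketch of that pattern, reconstructed from its output rather than from the actual helper source (the real helper also manages xtrace state and return codes):

  # rough shape of the run_test wrapper, reconstructed from its output
  run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"            # e.g. accel_test -t 1 -w decompress ...
    echo "END TEST $name"
  }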
00:07:51.442 [2024-12-15 06:48:12.753736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203426 ] 00:07:51.442 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.442 [2024-12-15 06:48:12.825362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.442 [2024-12-15 06:48:12.862782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.442 [2024-12-15 06:48:12.862878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.442 [2024-12-15 06:48:12.862960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.442 [2024-12-15 06:48:12.862962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.820 06:48:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:52.820 00:07:52.820 SPDK Configuration: 00:07:52.820 Core mask: 0xf 00:07:52.820 00:07:52.820 Accel Perf Configuration: 00:07:52.820 Workload Type: decompress 00:07:52.820 Transfer size: 111250 bytes 00:07:52.820 Vector count 1 00:07:52.820 Module: software 00:07:52.820 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:52.820 Queue depth: 32 00:07:52.820 Allocate depth: 32 00:07:52.820 # threads/core: 1 00:07:52.820 Run time: 1 seconds 00:07:52.820 Verify: Yes 00:07:52.820 00:07:52.820 Running for 1 seconds... 00:07:52.820 00:07:52.820 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.820 ------------------------------------------------------------------------------------ 00:07:52.820 0,0 5696/s 235 MiB/s 0 0 00:07:52.820 3,0 5696/s 235 MiB/s 0 0 00:07:52.820 2,0 5696/s 235 MiB/s 0 0 00:07:52.820 1,0 5696/s 235 MiB/s 0 0 00:07:52.820 ==================================================================================== 00:07:52.820 Total 22784/s 2417 MiB/s 0 0' 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.820 06:48:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.820 06:48:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.820 06:48:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.820 06:48:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.820 06:48:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.820 06:48:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.820 06:48:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.820 06:48:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.820 06:48:14 -- accel/accel.sh@42 -- # jq -r . 00:07:52.820 [2024-12-15 06:48:14.074875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
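The long runs of val=/case "$var"/read -r var val records below are the xtrace of the harness walking the accel_perf argument list one token at a time to build its configuration. The pattern being traced is an ordinary while/case tokenizer; a generic sketch of that shape follows (this is not the literal accel.sh source):

  # generic shape of the loop whose xtrace appears below; input is the option stream
  while IFS=: read -r var val; do
    case "$var" in
      -w) workload=$val ;;
      -o) size=$val ;;
      *)  : ;;             # remaining options ignored in this sketch
    esac
  done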
00:07:52.820 [2024-12-15 06:48:14.074943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203594 ] 00:07:52.820 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.820 [2024-12-15 06:48:14.144412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.820 [2024-12-15 06:48:14.181362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.820 [2024-12-15 06:48:14.181456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.820 [2024-12-15 06:48:14.181539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.820 [2024-12-15 06:48:14.181541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val=0xf 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val=decompress 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val=software 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val=32 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val=32 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val=1 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.820 06:48:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.820 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.820 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.821 06:48:14 -- accel/accel.sh@21 -- # val=Yes 00:07:52.821 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.821 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.821 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.821 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.821 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.821 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.821 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:52.821 06:48:14 -- accel/accel.sh@21 -- # val= 00:07:52.821 06:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.821 06:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:52.821 06:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 
-- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@21 -- # val= 00:07:53.756 06:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.756 06:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.756 06:48:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.756 06:48:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:53.756 06:48:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.756 00:07:53.756 real 0m2.648s 00:07:53.756 user 0m9.080s 00:07:53.756 sys 0m0.290s 00:07:53.756 06:48:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.756 06:48:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.756 ************************************ 00:07:53.756 END TEST accel_decomp_full_mcore 00:07:53.756 ************************************ 00:07:54.015 06:48:15 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.015 06:48:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:54.015 06:48:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.015 06:48:15 -- common/autotest_common.sh@10 -- # set +x 00:07:54.015 ************************************ 00:07:54.015 START TEST accel_decomp_mthread 00:07:54.015 ************************************ 00:07:54.015 06:48:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.015 06:48:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.015 06:48:15 -- accel/accel.sh@17 -- # local accel_module 00:07:54.015 06:48:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.015 06:48:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.015 06:48:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.015 06:48:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.015 06:48:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.015 06:48:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.015 06:48:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.015 06:48:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.015 06:48:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.015 06:48:15 -- accel/accel.sh@42 -- # jq -r . 00:07:54.015 [2024-12-15 06:48:15.440989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:54.015 [2024-12-15 06:48:15.441056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203779 ] 00:07:54.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.015 [2024-12-15 06:48:15.510026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.015 [2024-12-15 06:48:15.545636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.392 06:48:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:55.392 00:07:55.392 SPDK Configuration: 00:07:55.392 Core mask: 0x1 00:07:55.392 00:07:55.392 Accel Perf Configuration: 00:07:55.392 Workload Type: decompress 00:07:55.392 Transfer size: 4096 bytes 00:07:55.392 Vector count 1 00:07:55.392 Module: software 00:07:55.392 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:55.392 Queue depth: 32 00:07:55.392 Allocate depth: 32 00:07:55.392 # threads/core: 2 00:07:55.392 Run time: 1 seconds 00:07:55.392 Verify: Yes 00:07:55.392 00:07:55.392 Running for 1 seconds... 00:07:55.392 00:07:55.392 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.392 ------------------------------------------------------------------------------------ 00:07:55.392 0,1 43680/s 80 MiB/s 0 0 00:07:55.392 0,0 43584/s 80 MiB/s 0 0 00:07:55.393 ==================================================================================== 00:07:55.393 Total 87264/s 340 MiB/s 0 0' 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.393 06:48:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:55.393 06:48:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.393 06:48:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.393 06:48:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.393 06:48:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.393 06:48:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.393 06:48:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.393 06:48:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.393 06:48:16 -- accel/accel.sh@42 -- # jq -r . 00:07:55.393 [2024-12-15 06:48:16.744705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
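The -T 2 run puts two worker threads on core 0, which is why the results table above has two rows, 0,0 and 0,1 (core,thread), splitting the work almost evenly: 43680 + 43584 = 87264 transfers/s, essentially the same 340 MiB/s the single-threaded run reached, as expected on one core. A hypothetical standalone invocation (the in-tree path is an assumption, as before):

  # assumption: SPDK built in-tree under ./spdk; -T 2 requests two threads per core
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -T 2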
00:07:55.393 [2024-12-15 06:48:16.744774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204018 ] 00:07:55.393 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.393 [2024-12-15 06:48:16.814056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.393 [2024-12-15 06:48:16.848371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=0x1 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=decompress 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=software 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=32 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=32 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=2 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val=Yes 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:55.393 06:48:16 -- accel/accel.sh@21 -- # val= 00:07:55.393 06:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:55.393 06:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@21 -- # val= 00:07:56.771 06:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@21 -- # val= 00:07:56.771 06:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@21 -- # val= 00:07:56.771 06:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@21 -- # val= 00:07:56.771 06:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@21 -- # val= 00:07:56.771 06:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@21 -- # val= 00:07:56.771 06:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@21 -- # val= 00:07:56.771 06:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.771 06:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.771 06:48:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.771 06:48:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:56.771 06:48:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.771 00:07:56.771 real 0m2.611s 00:07:56.771 user 0m2.366s 00:07:56.771 sys 0m0.254s 00:07:56.771 06:48:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.771 06:48:18 -- common/autotest_common.sh@10 -- # set +x 
00:07:56.771 ************************************ 00:07:56.771 END TEST accel_decomp_mthread 00:07:56.771 ************************************ 00:07:56.771 06:48:18 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.771 06:48:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:56.771 06:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.771 06:48:18 -- common/autotest_common.sh@10 -- # set +x 00:07:56.771 ************************************ 00:07:56.771 START TEST accel_deomp_full_mthread 00:07:56.771 ************************************ 00:07:56.771 06:48:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.771 06:48:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.771 06:48:18 -- accel/accel.sh@17 -- # local accel_module 00:07:56.771 06:48:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.771 06:48:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.771 06:48:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.771 06:48:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.771 06:48:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.771 06:48:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.771 06:48:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.771 06:48:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.771 06:48:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.771 06:48:18 -- accel/accel.sh@42 -- # jq -r . 00:07:56.771 [2024-12-15 06:48:18.100105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.771 [2024-12-15 06:48:18.100173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204304 ] 00:07:56.771 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.771 [2024-12-15 06:48:18.168911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.771 [2024-12-15 06:48:18.203484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.148 06:48:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:58.148 00:07:58.148 SPDK Configuration: 00:07:58.148 Core mask: 0x1 00:07:58.148 00:07:58.148 Accel Perf Configuration: 00:07:58.148 Workload Type: decompress 00:07:58.148 Transfer size: 111250 bytes 00:07:58.148 Vector count 1 00:07:58.148 Module: software 00:07:58.148 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:58.148 Queue depth: 32 00:07:58.148 Allocate depth: 32 00:07:58.148 # threads/core: 2 00:07:58.148 Run time: 1 seconds 00:07:58.148 Verify: Yes 00:07:58.148 00:07:58.148 Running for 1 seconds... 
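Because every run here passes -y, the configuration reports Verify: Yes, meaning each decompressed buffer is checked back against the expected data, so the Failed and Miscompares columns should stay at 0 when the operation is correct. The timing lines are also worth reading carefully: on the 0xf runs above, user time (~9s) exceeds the ~2.6s wall time, which is consistent with four reactor threads busy-polling for the full one-second measurement plus setup. A quick check of that ratio, using the accel_decomp_mcore figures:

  # four polling reactors accrue CPU in parallel, so user time outruns wall time
  awk 'BEGIN { printf "%.2f cpu-seconds per wall-second\n", 9.015 / 2.633 }'
  # prints: 3.42 cpu-seconds per wall-second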
00:07:58.148
00:07:58.148 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:58.148 ------------------------------------------------------------------------------------
00:07:58.148 0,1 2880/s 118 MiB/s 0 0
00:07:58.148 0,0 2816/s 116 MiB/s 0 0
00:07:58.148 ====================================================================================
00:07:58.148 Total 5696/s 234 MiB/s 0 0'
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:58.148 06:48:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:58.148 06:48:19 -- accel/accel.sh@12 -- # build_accel_config
00:07:58.148 06:48:19 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:58.148 06:48:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:58.148 06:48:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:58.148 06:48:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:58.148 06:48:19 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:58.148 06:48:19 -- accel/accel.sh@41 -- # local IFS=,
00:07:58.148 06:48:19 -- accel/accel.sh@42 -- # jq -r .
00:07:58.148 [2024-12-15 06:48:19.420333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:07:58.148 [2024-12-15 06:48:19.420415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204575 ]
00:07:58.148 EAL: No free 2048 kB hugepages reported on node 1
00:07:58.148 [2024-12-15 06:48:19.491754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.148 [2024-12-15 06:48:19.525616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=0x1
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=decompress
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@24 -- # accel_opc=decompress
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val='111250 bytes'
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=software
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@23 -- # accel_module=software
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
00:07:58.148 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.148 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.148 06:48:19 -- accel/accel.sh@21 -- # val=32
00:07:58.149 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.149 06:48:19 -- accel/accel.sh@21 -- # val=32
00:07:58.149 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.149 06:48:19 -- accel/accel.sh@21 -- # val=2
00:07:58.149 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.149 06:48:19 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:58.149 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.149 06:48:19 -- accel/accel.sh@21 -- # val=Yes
00:07:58.149 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.149 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.149 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:58.149 06:48:19 -- accel/accel.sh@21 -- # val=
00:07:58.149 06:48:19 -- accel/accel.sh@22 -- # case "$var" in
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # IFS=:
00:07:58.149 06:48:19 -- accel/accel.sh@20 -- # read -r var val
00:07:59.085 06:48:20 -- accel/accel.sh@21 -- # val=
00:07:59.085 06:48:20 -- accel/accel.sh@22 -- # case "$var" in
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # IFS=:
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # read -r var val
00:07:59.085 06:48:20 -- accel/accel.sh@21 -- # val=
00:07:59.085 06:48:20 -- accel/accel.sh@22 -- # case "$var" in
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # IFS=:
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # read -r var val
00:07:59.085 06:48:20 -- accel/accel.sh@21 -- # val=
00:07:59.085 06:48:20 -- accel/accel.sh@22 -- # case "$var" in
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # IFS=:
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # read -r var val
00:07:59.085 06:48:20 -- accel/accel.sh@21 -- # val=
00:07:59.085 06:48:20 -- accel/accel.sh@22 -- # case "$var" in
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # IFS=:
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # read -r var val
00:07:59.085 06:48:20 -- accel/accel.sh@21 -- # val=
00:07:59.085 06:48:20 -- accel/accel.sh@22 -- # case "$var" in
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # IFS=:
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # read -r var val
00:07:59.085 06:48:20 -- accel/accel.sh@21 -- # val=
00:07:59.085 06:48:20 -- accel/accel.sh@22 -- # case "$var" in
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # IFS=:
00:07:59.085 06:48:20 -- accel/accel.sh@20 -- # read -r var val
00:07:59.085 06:48:20 -- accel/accel.sh@21 -- # val=
00:07:59.344 06:48:20 -- accel/accel.sh@22 -- # case "$var" in
00:07:59.344 06:48:20 -- accel/accel.sh@20 -- # IFS=:
00:07:59.344 06:48:20 -- accel/accel.sh@20 -- # read -r var val
00:07:59.345 06:48:20 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:59.345 06:48:20 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:07:59.345 06:48:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:59.345
00:07:59.345 real 0m2.655s
00:07:59.345 user 0m2.398s
00:07:59.345 sys 0m0.265s
00:07:59.345 06:48:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:59.345 06:48:20 -- common/autotest_common.sh@10 -- # set +x
00:07:59.345 ************************************
00:07:59.345 END TEST accel_deomp_full_mthread
00:07:59.345 ************************************
00:07:59.345 06:48:20 -- accel/accel.sh@116 -- # [[ n == y ]]
00:07:59.345 06:48:20 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:59.345 06:48:20 -- accel/accel.sh@129 -- # build_accel_config
00:07:59.345 06:48:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:59.345 06:48:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:59.345 06:48:20 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:59.345 06:48:20 -- common/autotest_common.sh@10 -- # set +x
00:07:59.345 06:48:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:59.345 06:48:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:59.345 06:48:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:59.345 06:48:20 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:59.345 06:48:20 -- accel/accel.sh@41 -- # local IFS=,
00:07:59.345 06:48:20 -- accel/accel.sh@42 -- # jq -r .
00:07:59.345 ************************************
00:07:59.345 START TEST accel_dif_functional_tests
00:07:59.345 ************************************
00:07:59.345 06:48:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:59.345 [2024-12-15 06:48:20.819987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
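Before the DIF suite's output below, a note on the decompress benchmark that just finished above: accel.sh drives build/examples/accel_perf with a JSON accel config assembled by build_accel_config. A minimal sketch for re-running it by hand outside the harness; flag meanings are inferred from the trace above, so confirm them against accel_perf's help output on your build:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    args=(
        -t 1                        # run time in seconds, as used above
        -w decompress               # workload under test
        -l "$SPDK/test/accel/bib"   # pre-compressed input file from the repo
        -y                          # verify the result
        -o 0 -T 2                   # copied verbatim from the trace; -T appears to be the thread count (assumption)
    )
    "$SPDK/build/examples/accel_perf" "${args[@]}"

The harness additionally passes -c /dev/fd/62 to feed in the generated JSON config; with no accel modules selected that config is effectively empty, so it is omitted in this sketch.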
00:07:59.345 [2024-12-15 06:48:20.820040] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204860 ]
00:07:59.345 EAL: No free 2048 kB hugepages reported on node 1
00:07:59.345 [2024-12-15 06:48:20.889777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:59.345 [2024-12-15 06:48:20.925181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:59.345 [2024-12-15 06:48:20.925276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:59.345 [2024-12-15 06:48:20.925276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.604
00:07:59.604
00:07:59.604 CUnit - A unit testing framework for C - Version 2.1-3
00:07:59.604 http://cunit.sourceforge.net/
00:07:59.604
00:07:59.604
00:07:59.604 Suite: accel_dif
00:07:59.604 Test: verify: DIF generated, GUARD check ...passed
00:07:59.604 Test: verify: DIF generated, APPTAG check ...passed
00:07:59.604 Test: verify: DIF generated, REFTAG check ...passed
00:07:59.604 Test: verify: DIF not generated, GUARD check ...[2024-12-15 06:48:20.988673] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:59.604 [2024-12-15 06:48:20.988720] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:59.604 passed
00:07:59.604 Test: verify: DIF not generated, APPTAG check ...[2024-12-15 06:48:20.988751] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:59.604 [2024-12-15 06:48:20.988769] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:59.604 passed
00:07:59.604 Test: verify: DIF not generated, REFTAG check ...[2024-12-15 06:48:20.988787] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:59.604 [2024-12-15 06:48:20.988804] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:59.604 passed
00:07:59.604 Test: verify: APPTAG correct, APPTAG check ...passed
00:07:59.604 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-15 06:48:20.988847] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:59.604 passed
00:07:59.604 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:59.604 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:59.604 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:59.604 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-15 06:48:20.988965] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:59.604 passed
00:07:59.604 Test: generate copy: DIF generated, GUARD check ...passed
00:07:59.604 Test: generate copy: DIF generated, APTTAG check ...passed
00:07:59.604 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:59.604 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:59.604 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:59.604 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:59.604 Test: generate copy: iovecs-len validate ...[2024-12-15 06:48:20.989157] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:59.604 passed
00:07:59.604 Test: generate copy: buffer alignment validate ...passed
00:07:59.604
00:07:59.604 Run Summary: Type Total Ran Passed Failed Inactive
00:07:59.604 suites 1 1 n/a 0 0
00:07:59.604 tests 20 20 20 0 0
00:07:59.604 asserts 204 204 204 0 n/a
00:07:59.604
00:07:59.604 Elapsed time = 0.002 seconds
00:07:59.604
00:07:59.604 real 0m0.368s
00:07:59.604 user 0m0.544s
00:07:59.604 sys 0m0.165s
00:07:59.604 06:48:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:59.604 06:48:21 -- common/autotest_common.sh@10 -- # set +x
00:07:59.604 ************************************
00:07:59.604 END TEST accel_dif_functional_tests
00:07:59.604 ************************************
00:07:59.604
00:07:59.604 real 0m55.831s
00:07:59.604 user 1m3.416s
00:07:59.604 sys 0m7.148s
00:07:59.604 06:48:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:59.604 06:48:21 -- common/autotest_common.sh@10 -- # set +x
00:07:59.604 ************************************
00:07:59.604 END TEST accel
00:07:59.604 ************************************
00:07:59.604 06:48:21 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:59.604 06:48:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:59.604 06:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:59.604 06:48:21 -- common/autotest_common.sh@10 -- # set +x
00:07:59.604 ************************************
00:07:59.604 START TEST accel_rpc
00:07:59.604 ************************************
00:07:59.604 06:48:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:59.864 * Looking for test storage...
00:07:59.864 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel
00:07:59.864 06:48:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:59.864 06:48:21 -- common/autotest_common.sh@1690 -- # lcov --version
00:07:59.864 06:48:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:59.864 06:48:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:59.864 06:48:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:59.864 06:48:21 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:59.864 06:48:21 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:59.864 06:48:21 -- scripts/common.sh@335 -- # IFS=.-:
00:07:59.864 06:48:21 -- scripts/common.sh@335 -- # read -ra ver1
00:07:59.864 06:48:21 -- scripts/common.sh@336 -- # IFS=.-:
00:07:59.864 06:48:21 -- scripts/common.sh@336 -- # read -ra ver2
00:07:59.864 06:48:21 -- scripts/common.sh@337 -- # local 'op=<'
00:07:59.864 06:48:21 -- scripts/common.sh@339 -- # ver1_l=2
00:07:59.864 06:48:21 -- scripts/common.sh@340 -- # ver2_l=1
00:07:59.864 06:48:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:59.864 06:48:21 -- scripts/common.sh@343 -- # case "$op" in
00:07:59.864 06:48:21 -- scripts/common.sh@344 -- # : 1
00:07:59.864 06:48:21 -- scripts/common.sh@363 -- # (( v = 0 ))
00:07:59.864 06:48:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:59.864 06:48:21 -- scripts/common.sh@364 -- # decimal 1
00:07:59.864 06:48:21 -- scripts/common.sh@352 -- # local d=1
00:07:59.864 06:48:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:59.864 06:48:21 -- scripts/common.sh@354 -- # echo 1
00:07:59.864 06:48:21 -- scripts/common.sh@364 -- # ver1[v]=1
00:07:59.864 06:48:21 -- scripts/common.sh@365 -- # decimal 2
00:07:59.864 06:48:21 -- scripts/common.sh@352 -- # local d=2
00:07:59.864 06:48:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:59.864 06:48:21 -- scripts/common.sh@354 -- # echo 2
00:07:59.864 06:48:21 -- scripts/common.sh@365 -- # ver2[v]=2
00:07:59.864 06:48:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:59.864 06:48:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:59.864 06:48:21 -- scripts/common.sh@367 -- # return 0
00:07:59.864 06:48:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:59.864 06:48:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.864 --rc genhtml_branch_coverage=1
00:07:59.864 --rc genhtml_function_coverage=1
00:07:59.864 --rc genhtml_legend=1
00:07:59.864 --rc geninfo_all_blocks=1
00:07:59.864 --rc geninfo_unexecuted_blocks=1
00:07:59.864
00:07:59.864 '
00:07:59.864 06:48:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.864 --rc genhtml_branch_coverage=1
00:07:59.864 --rc genhtml_function_coverage=1
00:07:59.864 --rc genhtml_legend=1
00:07:59.864 --rc geninfo_all_blocks=1
00:07:59.864 --rc geninfo_unexecuted_blocks=1
00:07:59.864
00:07:59.864 '
00:07:59.864 06:48:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:07:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.864 --rc genhtml_branch_coverage=1
00:07:59.864 --rc genhtml_function_coverage=1
00:07:59.864 --rc genhtml_legend=1
00:07:59.864 --rc geninfo_all_blocks=1
00:07:59.864 --rc geninfo_unexecuted_blocks=1
00:07:59.864
00:07:59.864 '
00:07:59.864 06:48:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:07:59.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.864 --rc genhtml_branch_coverage=1
00:07:59.864 --rc genhtml_function_coverage=1
00:07:59.864 --rc genhtml_legend=1
00:07:59.864 --rc geninfo_all_blocks=1
00:07:59.864 --rc geninfo_unexecuted_blocks=1
00:07:59.864
00:07:59.864 '
00:07:59.864 06:48:21 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:59.864 06:48:21 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1204937
00:07:59.864 06:48:21 -- accel/accel_rpc.sh@15 -- # waitforlisten 1204937
00:07:59.864 06:48:21 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:59.864 06:48:21 -- common/autotest_common.sh@829 -- # '[' -z 1204937 ']'
00:07:59.864 06:48:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.864 06:48:21 -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:59.864 06:48:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:59.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
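The waitforlisten step above blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock (with max_retries=100 per the trace). A minimal sketch of the same idea, not SPDK's exact helper:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Poll the RPC socket until the target responds; rpc_get_methods is served
    # even in the pre-init state entered via --wait-for-rpc (assumption based
    # on the calls visible in this log).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done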
00:07:59.864 06:48:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.864 06:48:21 -- common/autotest_common.sh@10 -- # set +x 00:07:59.864 [2024-12-15 06:48:21.466611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:59.864 [2024-12-15 06:48:21.466660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204937 ] 00:07:59.864 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.123 [2024-12-15 06:48:21.536252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.123 [2024-12-15 06:48:21.572357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:00.123 [2024-12-15 06:48:21.572486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.123 06:48:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.123 06:48:21 -- common/autotest_common.sh@862 -- # return 0 00:08:00.123 06:48:21 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:00.123 06:48:21 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:00.123 06:48:21 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:00.124 06:48:21 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:00.124 06:48:21 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:00.124 06:48:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:00.124 06:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.124 06:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.124 ************************************ 00:08:00.124 START TEST accel_assign_opcode 00:08:00.124 ************************************ 00:08:00.124 06:48:21 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:00.124 06:48:21 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:00.124 06:48:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.124 06:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.124 [2024-12-15 06:48:21.620902] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:00.124 06:48:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.124 06:48:21 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:00.124 06:48:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.124 06:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.124 [2024-12-15 06:48:21.628913] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:00.124 06:48:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.124 06:48:21 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:00.124 06:48:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.124 06:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.383 06:48:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.383 06:48:21 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:00.383 06:48:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.383 06:48:21 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:00.383 06:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.383 06:48:21 -- accel/accel_rpc.sh@42 -- # grep software 00:08:00.383 06:48:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:00.383 software 00:08:00.383 00:08:00.383 real 0m0.216s 00:08:00.383 user 0m0.036s 00:08:00.383 sys 0m0.015s 00:08:00.383 06:48:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.383 06:48:21 -- common/autotest_common.sh@10 -- # set +x 00:08:00.383 ************************************ 00:08:00.383 END TEST accel_assign_opcode 00:08:00.383 ************************************ 00:08:00.383 06:48:21 -- accel/accel_rpc.sh@55 -- # killprocess 1204937 00:08:00.383 06:48:21 -- common/autotest_common.sh@936 -- # '[' -z 1204937 ']' 00:08:00.383 06:48:21 -- common/autotest_common.sh@940 -- # kill -0 1204937 00:08:00.383 06:48:21 -- common/autotest_common.sh@941 -- # uname 00:08:00.383 06:48:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:00.383 06:48:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1204937 00:08:00.383 06:48:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:00.383 06:48:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:00.383 06:48:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1204937' 00:08:00.383 killing process with pid 1204937 00:08:00.383 06:48:21 -- common/autotest_common.sh@955 -- # kill 1204937 00:08:00.383 06:48:21 -- common/autotest_common.sh@960 -- # wait 1204937 00:08:00.642 00:08:00.642 real 0m1.004s 00:08:00.642 user 0m0.891s 00:08:00.642 sys 0m0.471s 00:08:00.642 06:48:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.642 06:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:00.642 ************************************ 00:08:00.642 END TEST accel_rpc 00:08:00.642 ************************************ 00:08:00.642 06:48:22 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:00.642 06:48:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:00.642 06:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.642 06:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 ************************************ 00:08:00.902 START TEST app_cmdline 00:08:00.902 ************************************ 00:08:00.902 06:48:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:00.902 * Looking for test storage... 
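The accel_assign_opcode test that just passed above exercises the pause-configure-resume pattern: because spdk_tgt was started with --wait-for-rpc, opcode-to-module assignments can still be changed before the framework initializes. Reconstructed from the rpc_cmd calls traced above (rpc_cmd resolves to scripts/rpc.py here, as I read the harness), a sketch of the sequence:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &                      # target starts paused
    # ... wait for the RPC socket as above ...
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software      # reassign the copy opcode
    "$SPDK/scripts/rpc.py" framework_start_init                      # finish initialization
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expect: software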
00:08:00.902 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:00.902 06:48:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:00.902 06:48:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:00.902 06:48:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:00.902 06:48:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:00.902 06:48:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:00.902 06:48:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:00.902 06:48:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:00.902 06:48:22 -- scripts/common.sh@335 -- # IFS=.-: 00:08:00.902 06:48:22 -- scripts/common.sh@335 -- # read -ra ver1 00:08:00.902 06:48:22 -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.902 06:48:22 -- scripts/common.sh@336 -- # read -ra ver2 00:08:00.902 06:48:22 -- scripts/common.sh@337 -- # local 'op=<' 00:08:00.902 06:48:22 -- scripts/common.sh@339 -- # ver1_l=2 00:08:00.902 06:48:22 -- scripts/common.sh@340 -- # ver2_l=1 00:08:00.902 06:48:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:00.902 06:48:22 -- scripts/common.sh@343 -- # case "$op" in 00:08:00.902 06:48:22 -- scripts/common.sh@344 -- # : 1 00:08:00.902 06:48:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:00.902 06:48:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.902 06:48:22 -- scripts/common.sh@364 -- # decimal 1 00:08:00.902 06:48:22 -- scripts/common.sh@352 -- # local d=1 00:08:00.902 06:48:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.902 06:48:22 -- scripts/common.sh@354 -- # echo 1 00:08:00.902 06:48:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:00.902 06:48:22 -- scripts/common.sh@365 -- # decimal 2 00:08:00.902 06:48:22 -- scripts/common.sh@352 -- # local d=2 00:08:00.902 06:48:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.902 06:48:22 -- scripts/common.sh@354 -- # echo 2 00:08:00.902 06:48:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:00.902 06:48:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:00.902 06:48:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:00.902 06:48:22 -- scripts/common.sh@367 -- # return 0 00:08:00.902 06:48:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.902 06:48:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:00.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.902 --rc genhtml_branch_coverage=1 00:08:00.902 --rc genhtml_function_coverage=1 00:08:00.902 --rc genhtml_legend=1 00:08:00.902 --rc geninfo_all_blocks=1 00:08:00.902 --rc geninfo_unexecuted_blocks=1 00:08:00.902 00:08:00.902 ' 00:08:00.902 06:48:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:00.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.902 --rc genhtml_branch_coverage=1 00:08:00.902 --rc genhtml_function_coverage=1 00:08:00.902 --rc genhtml_legend=1 00:08:00.902 --rc geninfo_all_blocks=1 00:08:00.902 --rc geninfo_unexecuted_blocks=1 00:08:00.902 00:08:00.902 ' 00:08:00.902 06:48:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:00.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.902 --rc genhtml_branch_coverage=1 00:08:00.902 --rc genhtml_function_coverage=1 00:08:00.902 --rc genhtml_legend=1 00:08:00.902 --rc geninfo_all_blocks=1 00:08:00.902 --rc geninfo_unexecuted_blocks=1 00:08:00.902 00:08:00.902 ' 
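The app_cmdline test below starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; anything else should fail with JSON-RPC error -32601, which the env_dpdk_get_mem_stats probe further down confirms. In sketch form, against the default socket:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" spdk_get_version           # allowed: returns the version JSON seen below
    "$SPDK/scripts/rpc.py" rpc_get_methods            # allowed: lists exactly the two permitted methods
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # rejected: "Method not found" (-32601)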
00:08:00.902 06:48:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:00.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.902 --rc genhtml_branch_coverage=1 00:08:00.902 --rc genhtml_function_coverage=1 00:08:00.902 --rc genhtml_legend=1 00:08:00.902 --rc geninfo_all_blocks=1 00:08:00.902 --rc geninfo_unexecuted_blocks=1 00:08:00.902 00:08:00.902 ' 00:08:00.902 06:48:22 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:00.902 06:48:22 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1205277 00:08:00.902 06:48:22 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:00.902 06:48:22 -- app/cmdline.sh@18 -- # waitforlisten 1205277 00:08:00.902 06:48:22 -- common/autotest_common.sh@829 -- # '[' -z 1205277 ']' 00:08:00.902 06:48:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.902 06:48:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.902 06:48:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.902 06:48:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.902 06:48:22 -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 [2024-12-15 06:48:22.519231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:00.902 [2024-12-15 06:48:22.519282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205277 ] 00:08:01.161 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.161 [2024-12-15 06:48:22.588412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.161 [2024-12-15 06:48:22.625256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.161 [2024-12-15 06:48:22.625379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.728 06:48:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.728 06:48:23 -- common/autotest_common.sh@862 -- # return 0 00:08:01.728 06:48:23 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:01.987 { 00:08:01.987 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:01.987 "fields": { 00:08:01.987 "major": 24, 00:08:01.987 "minor": 1, 00:08:01.987 "patch": 1, 00:08:01.987 "suffix": "-pre", 00:08:01.987 "commit": "c13c99a5e" 00:08:01.987 } 00:08:01.987 } 00:08:01.987 06:48:23 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:01.987 06:48:23 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:01.987 06:48:23 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:01.987 06:48:23 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:01.987 06:48:23 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:01.987 06:48:23 -- app/cmdline.sh@26 -- # sort 00:08:01.987 06:48:23 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:01.987 06:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.987 06:48:23 -- common/autotest_common.sh@10 -- # set +x 00:08:01.987 06:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.987 06:48:23 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:01.987 06:48:23 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:01.987 06:48:23 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.987 06:48:23 -- common/autotest_common.sh@650 -- # local es=0 00:08:01.987 06:48:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.987 06:48:23 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:01.987 06:48:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.987 06:48:23 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:01.987 06:48:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.987 06:48:23 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:01.987 06:48:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.987 06:48:23 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:01.987 06:48:23 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:01.987 06:48:23 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:02.246 request: 00:08:02.246 { 00:08:02.246 "method": "env_dpdk_get_mem_stats", 00:08:02.246 "req_id": 1 00:08:02.246 } 00:08:02.246 Got JSON-RPC error response 00:08:02.246 response: 00:08:02.246 { 00:08:02.246 "code": -32601, 00:08:02.246 "message": "Method not found" 00:08:02.246 } 00:08:02.246 06:48:23 -- common/autotest_common.sh@653 -- # es=1 00:08:02.246 06:48:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.246 06:48:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.246 06:48:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.246 06:48:23 -- app/cmdline.sh@1 -- # killprocess 1205277 00:08:02.246 06:48:23 -- common/autotest_common.sh@936 -- # '[' -z 1205277 ']' 00:08:02.246 06:48:23 -- common/autotest_common.sh@940 -- # kill -0 1205277 00:08:02.246 06:48:23 -- common/autotest_common.sh@941 -- # uname 00:08:02.246 06:48:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:02.246 06:48:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1205277 00:08:02.246 06:48:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:02.246 06:48:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:02.246 06:48:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1205277' 00:08:02.246 killing process with pid 1205277 00:08:02.246 06:48:23 -- common/autotest_common.sh@955 -- # kill 1205277 00:08:02.246 06:48:23 -- common/autotest_common.sh@960 -- # wait 1205277 00:08:02.506 00:08:02.506 real 0m1.803s 00:08:02.506 user 0m2.081s 00:08:02.506 sys 0m0.520s 00:08:02.506 06:48:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.506 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:02.506 ************************************ 00:08:02.506 END TEST app_cmdline 00:08:02.506 ************************************ 00:08:02.506 06:48:24 -- spdk/autotest.sh@179 -- # run_test version 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:02.506 06:48:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.506 06:48:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.506 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:02.506 ************************************ 00:08:02.506 START TEST version 00:08:02.506 ************************************ 00:08:02.506 06:48:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:02.765 * Looking for test storage... 00:08:02.765 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:02.765 06:48:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:02.765 06:48:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:02.765 06:48:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:02.765 06:48:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:02.765 06:48:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:02.765 06:48:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:02.765 06:48:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:02.765 06:48:24 -- scripts/common.sh@335 -- # IFS=.-: 00:08:02.765 06:48:24 -- scripts/common.sh@335 -- # read -ra ver1 00:08:02.765 06:48:24 -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.765 06:48:24 -- scripts/common.sh@336 -- # read -ra ver2 00:08:02.765 06:48:24 -- scripts/common.sh@337 -- # local 'op=<' 00:08:02.765 06:48:24 -- scripts/common.sh@339 -- # ver1_l=2 00:08:02.765 06:48:24 -- scripts/common.sh@340 -- # ver2_l=1 00:08:02.765 06:48:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:02.765 06:48:24 -- scripts/common.sh@343 -- # case "$op" in 00:08:02.765 06:48:24 -- scripts/common.sh@344 -- # : 1 00:08:02.765 06:48:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:02.765 06:48:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.765 06:48:24 -- scripts/common.sh@364 -- # decimal 1 00:08:02.765 06:48:24 -- scripts/common.sh@352 -- # local d=1 00:08:02.765 06:48:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.765 06:48:24 -- scripts/common.sh@354 -- # echo 1 00:08:02.765 06:48:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:02.765 06:48:24 -- scripts/common.sh@365 -- # decimal 2 00:08:02.765 06:48:24 -- scripts/common.sh@352 -- # local d=2 00:08:02.765 06:48:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.765 06:48:24 -- scripts/common.sh@354 -- # echo 2 00:08:02.765 06:48:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:02.765 06:48:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:02.765 06:48:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:02.765 06:48:24 -- scripts/common.sh@367 -- # return 0 00:08:02.765 06:48:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.765 06:48:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:02.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.765 --rc genhtml_branch_coverage=1 00:08:02.765 --rc genhtml_function_coverage=1 00:08:02.765 --rc genhtml_legend=1 00:08:02.765 --rc geninfo_all_blocks=1 00:08:02.765 --rc geninfo_unexecuted_blocks=1 00:08:02.765 00:08:02.765 ' 00:08:02.765 06:48:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:02.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.765 --rc genhtml_branch_coverage=1 00:08:02.765 --rc genhtml_function_coverage=1 00:08:02.765 --rc genhtml_legend=1 00:08:02.765 --rc geninfo_all_blocks=1 00:08:02.765 --rc geninfo_unexecuted_blocks=1 00:08:02.765 00:08:02.765 ' 00:08:02.765 06:48:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:02.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.765 --rc genhtml_branch_coverage=1 00:08:02.765 --rc genhtml_function_coverage=1 00:08:02.765 --rc genhtml_legend=1 00:08:02.765 --rc geninfo_all_blocks=1 00:08:02.765 --rc geninfo_unexecuted_blocks=1 00:08:02.765 00:08:02.765 ' 00:08:02.765 06:48:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:02.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.765 --rc genhtml_branch_coverage=1 00:08:02.765 --rc genhtml_function_coverage=1 00:08:02.765 --rc genhtml_legend=1 00:08:02.765 --rc geninfo_all_blocks=1 00:08:02.765 --rc geninfo_unexecuted_blocks=1 00:08:02.765 00:08:02.765 ' 00:08:02.765 06:48:24 -- app/version.sh@17 -- # get_header_version major 00:08:02.765 06:48:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:02.765 06:48:24 -- app/version.sh@14 -- # cut -f2 00:08:02.765 06:48:24 -- app/version.sh@14 -- # tr -d '"' 00:08:02.765 06:48:24 -- app/version.sh@17 -- # major=24 00:08:02.765 06:48:24 -- app/version.sh@18 -- # get_header_version minor 00:08:02.765 06:48:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:02.765 06:48:24 -- app/version.sh@14 -- # cut -f2 00:08:02.765 06:48:24 -- app/version.sh@14 -- # tr -d '"' 00:08:02.765 06:48:24 -- app/version.sh@18 -- # minor=1 00:08:02.765 06:48:24 -- app/version.sh@19 -- # get_header_version patch 00:08:02.765 06:48:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:02.765 06:48:24 -- app/version.sh@14 -- # cut -f2 00:08:02.765 06:48:24 -- app/version.sh@14 -- # tr -d '"' 00:08:02.765 06:48:24 -- app/version.sh@19 -- # patch=1 00:08:02.765 06:48:24 -- app/version.sh@20 -- # get_header_version suffix 00:08:02.765 06:48:24 -- app/version.sh@14 -- # cut -f2 00:08:02.765 06:48:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:02.765 06:48:24 -- app/version.sh@14 -- # tr -d '"' 00:08:02.765 06:48:24 -- app/version.sh@20 -- # suffix=-pre 00:08:02.765 06:48:24 -- app/version.sh@22 -- # version=24.1 00:08:02.765 06:48:24 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:02.765 06:48:24 -- app/version.sh@25 -- # version=24.1.1 00:08:02.765 06:48:24 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:02.766 06:48:24 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:02.766 06:48:24 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:02.766 06:48:24 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:02.766 06:48:24 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:02.766 00:08:02.766 real 0m0.264s 00:08:02.766 user 0m0.134s 00:08:02.766 sys 0m0.182s 00:08:02.766 06:48:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.766 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:02.766 ************************************ 00:08:02.766 END TEST version 00:08:02.766 ************************************ 00:08:03.035 06:48:24 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:03.035 06:48:24 -- spdk/autotest.sh@191 -- # uname -s 00:08:03.035 06:48:24 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:08:03.035 06:48:24 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:03.035 06:48:24 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:03.035 06:48:24 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:03.035 06:48:24 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:03.035 06:48:24 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:03.035 06:48:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.035 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:03.035 06:48:24 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:03.035 06:48:24 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:03.035 06:48:24 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:03.035 06:48:24 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:03.035 06:48:24 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:08:03.035 06:48:24 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:03.035 06:48:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:03.035 06:48:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.035 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:03.035 ************************************ 00:08:03.035 START TEST nvmf_rdma 00:08:03.035 ************************************ 00:08:03.035 06:48:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:03.035 * Looking 
for test storage... 00:08:03.035 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:03.035 06:48:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.035 06:48:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.035 06:48:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.035 06:48:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.035 06:48:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.035 06:48:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.035 06:48:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.035 06:48:24 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.035 06:48:24 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.035 06:48:24 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.035 06:48:24 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.035 06:48:24 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.035 06:48:24 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.035 06:48:24 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.035 06:48:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.035 06:48:24 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.035 06:48:24 -- scripts/common.sh@344 -- # : 1 00:08:03.035 06:48:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.035 06:48:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.035 06:48:24 -- scripts/common.sh@364 -- # decimal 1 00:08:03.035 06:48:24 -- scripts/common.sh@352 -- # local d=1 00:08:03.035 06:48:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.035 06:48:24 -- scripts/common.sh@354 -- # echo 1 00:08:03.035 06:48:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.035 06:48:24 -- scripts/common.sh@365 -- # decimal 2 00:08:03.296 06:48:24 -- scripts/common.sh@352 -- # local d=2 00:08:03.296 06:48:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.296 06:48:24 -- scripts/common.sh@354 -- # echo 2 00:08:03.296 06:48:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.296 06:48:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.296 06:48:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.296 06:48:24 -- scripts/common.sh@367 -- # return 0 00:08:03.296 06:48:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.296 06:48:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.296 --rc genhtml_branch_coverage=1 00:08:03.296 --rc genhtml_function_coverage=1 00:08:03.296 --rc genhtml_legend=1 00:08:03.296 --rc geninfo_all_blocks=1 00:08:03.296 --rc geninfo_unexecuted_blocks=1 00:08:03.296 00:08:03.296 ' 00:08:03.296 06:48:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.296 --rc genhtml_branch_coverage=1 00:08:03.296 --rc genhtml_function_coverage=1 00:08:03.296 --rc genhtml_legend=1 00:08:03.296 --rc geninfo_all_blocks=1 00:08:03.296 --rc geninfo_unexecuted_blocks=1 00:08:03.296 00:08:03.296 ' 00:08:03.296 06:48:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.296 --rc genhtml_branch_coverage=1 00:08:03.296 --rc genhtml_function_coverage=1 00:08:03.296 --rc genhtml_legend=1 00:08:03.296 --rc geninfo_all_blocks=1 00:08:03.296 --rc geninfo_unexecuted_blocks=1 00:08:03.296 
00:08:03.296 ' 00:08:03.296 06:48:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.296 --rc genhtml_branch_coverage=1 00:08:03.296 --rc genhtml_function_coverage=1 00:08:03.296 --rc genhtml_legend=1 00:08:03.296 --rc geninfo_all_blocks=1 00:08:03.296 --rc geninfo_unexecuted_blocks=1 00:08:03.296 00:08:03.296 ' 00:08:03.296 06:48:24 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:03.296 06:48:24 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:03.296 06:48:24 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.296 06:48:24 -- nvmf/common.sh@7 -- # uname -s 00:08:03.297 06:48:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.297 06:48:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.297 06:48:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.297 06:48:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.297 06:48:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.297 06:48:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.297 06:48:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.297 06:48:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.297 06:48:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.297 06:48:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.297 06:48:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:03.297 06:48:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:03.297 06:48:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.297 06:48:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.297 06:48:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.297 06:48:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:03.297 06:48:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.297 06:48:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.297 06:48:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.297 06:48:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.297 06:48:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.297 06:48:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.297 06:48:24 -- paths/export.sh@5 -- # export PATH 00:08:03.297 06:48:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.297 06:48:24 -- nvmf/common.sh@46 -- # : 0 00:08:03.297 06:48:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:03.297 06:48:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:03.297 06:48:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:03.297 06:48:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.297 06:48:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.297 06:48:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:03.297 06:48:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:03.297 06:48:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:03.297 06:48:24 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:03.297 06:48:24 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:03.297 06:48:24 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:03.297 06:48:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.297 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:03.297 06:48:24 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:03.297 06:48:24 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:03.297 06:48:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:03.297 06:48:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.297 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:03.297 ************************************ 00:08:03.297 START TEST nvmf_example 00:08:03.297 ************************************ 00:08:03.297 06:48:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:03.297 * Looking for test storage... 
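nvmf/common.sh, sourced above for nvmf.sh and again below for the example test, derives the host NQN and host ID from nvme-cli rather than hard-coding them. A sketch of the idea traced at nvmf/common.sh@17-18; the suffix extraction shown is my assumption, and common.sh's actual parsing may differ:

    # Generate an NQN from the host's UUID, then reuse the uuid suffix as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # strips everything through the last colon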
00:08:03.297 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:03.297 06:48:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.297 06:48:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.297 06:48:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.297 06:48:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.297 06:48:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.297 06:48:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.297 06:48:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.297 06:48:24 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.297 06:48:24 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.297 06:48:24 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.297 06:48:24 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.297 06:48:24 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.297 06:48:24 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.297 06:48:24 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.297 06:48:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.297 06:48:24 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.297 06:48:24 -- scripts/common.sh@344 -- # : 1 00:08:03.297 06:48:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.297 06:48:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.297 06:48:24 -- scripts/common.sh@364 -- # decimal 1 00:08:03.297 06:48:24 -- scripts/common.sh@352 -- # local d=1 00:08:03.297 06:48:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.297 06:48:24 -- scripts/common.sh@354 -- # echo 1 00:08:03.297 06:48:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.297 06:48:24 -- scripts/common.sh@365 -- # decimal 2 00:08:03.297 06:48:24 -- scripts/common.sh@352 -- # local d=2 00:08:03.297 06:48:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.297 06:48:24 -- scripts/common.sh@354 -- # echo 2 00:08:03.297 06:48:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.297 06:48:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.297 06:48:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.297 06:48:24 -- scripts/common.sh@367 -- # return 0 00:08:03.297 06:48:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.297 06:48:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.297 --rc genhtml_branch_coverage=1 00:08:03.297 --rc genhtml_function_coverage=1 00:08:03.297 --rc genhtml_legend=1 00:08:03.297 --rc geninfo_all_blocks=1 00:08:03.297 --rc geninfo_unexecuted_blocks=1 00:08:03.297 00:08:03.297 ' 00:08:03.297 06:48:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.297 --rc genhtml_branch_coverage=1 00:08:03.297 --rc genhtml_function_coverage=1 00:08:03.297 --rc genhtml_legend=1 00:08:03.297 --rc geninfo_all_blocks=1 00:08:03.297 --rc geninfo_unexecuted_blocks=1 00:08:03.297 00:08:03.297 ' 00:08:03.297 06:48:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.297 --rc genhtml_branch_coverage=1 00:08:03.297 --rc genhtml_function_coverage=1 00:08:03.297 --rc genhtml_legend=1 00:08:03.297 --rc geninfo_all_blocks=1 00:08:03.297 --rc geninfo_unexecuted_blocks=1 00:08:03.297 00:08:03.297 ' 
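nvmftestinit below calls gather_supported_nvmf_pci_devs, which walks the PCI bus for Intel (0x8086) and Mellanox (0x15b3) NICs; this run matches two 0x15b3/0x1015 devices (ConnectX-4 Lx, to my reading) at 0000:d9:00.0 and 0000:d9:00.1 and keeps the mlx list since the driver is mlx5_core. Roughly equivalent by hand, though the harness uses its own pci bus cache rather than lspci:

    # List Mellanox devices and filter for the device IDs common.sh recognizes.
    lspci -nn -d 15b3: | grep -E '1013|1015|1017|1019|101d|1021|a2d6|a2dc'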
00:08:03.297 06:48:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.297 --rc genhtml_branch_coverage=1 00:08:03.297 --rc genhtml_function_coverage=1 00:08:03.297 --rc genhtml_legend=1 00:08:03.297 --rc geninfo_all_blocks=1 00:08:03.297 --rc geninfo_unexecuted_blocks=1 00:08:03.297 00:08:03.297 ' 00:08:03.297 06:48:24 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.297 06:48:24 -- nvmf/common.sh@7 -- # uname -s 00:08:03.297 06:48:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.297 06:48:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.297 06:48:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.297 06:48:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.297 06:48:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.297 06:48:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.297 06:48:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.297 06:48:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.297 06:48:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.297 06:48:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.557 06:48:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:03.557 06:48:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:03.557 06:48:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.557 06:48:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.557 06:48:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.557 06:48:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:03.557 06:48:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.557 06:48:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.557 06:48:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.557 06:48:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.557 06:48:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.557 06:48:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.557 06:48:24 -- paths/export.sh@5 -- # export PATH 00:08:03.557 06:48:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.557 06:48:24 -- nvmf/common.sh@46 -- # : 0 00:08:03.557 06:48:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:03.557 06:48:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:03.557 06:48:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:03.557 06:48:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.557 06:48:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.557 06:48:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:03.557 06:48:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:03.557 06:48:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:03.557 06:48:24 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:03.557 06:48:24 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:03.557 06:48:24 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:03.557 06:48:24 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:03.557 06:48:24 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:03.557 06:48:24 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:03.557 06:48:24 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:03.557 06:48:24 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:03.557 06:48:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.557 06:48:24 -- common/autotest_common.sh@10 -- # set +x 00:08:03.557 06:48:24 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:03.557 06:48:24 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:03.557 06:48:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.557 06:48:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:03.557 06:48:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:03.557 06:48:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:03.557 06:48:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.557 06:48:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.557 06:48:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.557 06:48:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:03.557 06:48:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:03.557 06:48:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:03.557 06:48:24 -- 
common/autotest_common.sh@10 -- # set +x 00:08:10.123 06:48:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:10.123 06:48:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:10.123 06:48:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:10.123 06:48:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:10.123 06:48:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:10.123 06:48:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:10.123 06:48:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:10.123 06:48:31 -- nvmf/common.sh@294 -- # net_devs=() 00:08:10.123 06:48:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:10.123 06:48:31 -- nvmf/common.sh@295 -- # e810=() 00:08:10.123 06:48:31 -- nvmf/common.sh@295 -- # local -ga e810 00:08:10.123 06:48:31 -- nvmf/common.sh@296 -- # x722=() 00:08:10.123 06:48:31 -- nvmf/common.sh@296 -- # local -ga x722 00:08:10.123 06:48:31 -- nvmf/common.sh@297 -- # mlx=() 00:08:10.123 06:48:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:10.123 06:48:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.123 06:48:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:10.123 06:48:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:10.123 06:48:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:10.123 06:48:31 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:10.123 06:48:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:10.123 06:48:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:10.123 06:48:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:10.123 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:10.123 06:48:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:10.123 06:48:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:10.123 06:48:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:10.123 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:10.123 06:48:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:10.123 06:48:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:10.123 06:48:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:10.123 06:48:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.123 06:48:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:10.123 06:48:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.123 06:48:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:10.123 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:10.123 06:48:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.123 06:48:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:10.123 06:48:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.123 06:48:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:10.123 06:48:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.123 06:48:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:10.123 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:10.123 06:48:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.123 06:48:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:10.123 06:48:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:10.123 06:48:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:10.123 06:48:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:10.124 06:48:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:10.124 06:48:31 -- nvmf/common.sh@57 -- # uname 00:08:10.124 06:48:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:10.124 06:48:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:10.124 06:48:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:10.124 06:48:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:10.124 06:48:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:10.124 06:48:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:10.124 06:48:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:10.124 06:48:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:10.124 06:48:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:10.124 06:48:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:10.124 06:48:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:10.124 06:48:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:10.124 06:48:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:10.124 06:48:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:10.124 06:48:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:10.124 06:48:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:10.124 06:48:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@104 -- # continue 2 00:08:10.124 06:48:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@104 -- # continue 2 00:08:10.124 06:48:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:10.124 06:48:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:10.124 06:48:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:10.124 06:48:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:10.124 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:10.124 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:10.124 altname enp217s0f0np0 00:08:10.124 altname ens818f0np0 00:08:10.124 inet 192.168.100.8/24 scope global mlx_0_0 00:08:10.124 valid_lft forever preferred_lft forever 00:08:10.124 06:48:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:10.124 06:48:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:10.124 06:48:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:10.124 06:48:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:10.124 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:10.124 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:10.124 altname enp217s0f1np1 00:08:10.124 altname ens818f1np1 00:08:10.124 inet 192.168.100.9/24 scope global mlx_0_1 00:08:10.124 valid_lft forever preferred_lft forever 00:08:10.124 06:48:31 -- nvmf/common.sh@410 -- # return 0 00:08:10.124 06:48:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:10.124 06:48:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:10.124 06:48:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:10.124 06:48:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:10.124 06:48:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:10.124 06:48:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:10.124 06:48:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:10.124 06:48:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:10.124 06:48:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:10.124 06:48:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.124 06:48:31 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@104 -- # continue 2 00:08:10.124 06:48:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.124 06:48:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:10.124 06:48:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@104 -- # continue 2 00:08:10.124 06:48:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:10.124 06:48:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:10.124 06:48:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:10.124 06:48:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:10.124 06:48:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:10.124 06:48:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:10.124 192.168.100.9' 00:08:10.124 06:48:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:10.124 192.168.100.9' 00:08:10.124 06:48:31 -- nvmf/common.sh@445 -- # head -n 1 00:08:10.124 06:48:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:10.124 06:48:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:10.124 192.168.100.9' 00:08:10.124 06:48:31 -- nvmf/common.sh@446 -- # tail -n +2 00:08:10.124 06:48:31 -- nvmf/common.sh@446 -- # head -n 1 00:08:10.124 06:48:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:10.124 06:48:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:10.124 06:48:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:10.124 06:48:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:10.124 06:48:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:10.124 06:48:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:10.124 06:48:31 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:10.124 06:48:31 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:10.124 06:48:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.124 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:10.124 06:48:31 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:10.124 06:48:31 -- target/nvmf_example.sh@34 -- # nvmfpid=1209121 00:08:10.124 06:48:31 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:10.124 06:48:31 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.124 06:48:31 -- target/nvmf_example.sh@36 -- # waitforlisten 1209121 00:08:10.124 06:48:31 -- common/autotest_common.sh@829 -- # '[' -z 1209121 ']' 00:08:10.124 06:48:31 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.124 06:48:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.124 06:48:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.124 06:48:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.124 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:08:10.124 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.062 06:48:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.062 06:48:32 -- common/autotest_common.sh@862 -- # return 0 00:08:11.062 06:48:32 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:11.062 06:48:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.062 06:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 06:48:32 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:11.062 06:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.062 06:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 06:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.062 06:48:32 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:11.062 06:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.062 06:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 06:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.062 06:48:32 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:11.062 06:48:32 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:11.062 06:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.062 06:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 06:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.062 06:48:32 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:11.062 06:48:32 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:11.062 06:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.062 06:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 06:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.062 06:48:32 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:11.062 06:48:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.062 06:48:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 06:48:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.062 06:48:32 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:11.062 06:48:32 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:11.062 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.273 Initializing NVMe Controllers 00:08:23.273 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:23.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
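Stripped of the harness plumbing, the rpc_cmd sequence above is the entire target bring-up for this benchmark: one RDMA transport, one 64 MiB malloc ramdisk, one subsystem, one namespace, one listener. The same five calls can be replayed by hand with scripts/rpc.py against the example app (a sketch; binary path, core mask and listen address are taken from this run, and rpc.py talks to the default /var/tmp/spdk.sock socket used here):

    # start the example target on cores 0-3, then configure it over the RPC socket
    ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB, 512 B blocks -> "Malloc0"
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

With the listener up, the spdk_nvme_perf invocation shown above drives 64-deep 4 KiB random I/O with a 30/70 read/write mix (-M 30) at it for ten seconds.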
00:08:23.273 Initialization complete. Launching workers.
00:08:23.273 ========================================================
00:08:23.273                                                                                Latency(us)
00:08:23.273 Device Information                                               :       IOPS      MiB/s    Average        min        max
00:08:23.273 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   26986.72     105.42    2371.24     589.44   12083.88
00:08:23.273 ========================================================
00:08:23.273 Total                                                            :   26986.72     105.42    2371.24     589.44   12083.88
00:08:23.273
00:08:23.273 06:48:43 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:23.273 06:48:43 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:23.273 06:48:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:23.273 06:48:43 -- nvmf/common.sh@116 -- # sync 00:08:23.273 06:48:43 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:23.273 06:48:43 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:23.273 06:48:43 -- nvmf/common.sh@119 -- # set +e 00:08:23.273 06:48:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:23.273 06:48:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:23.273 rmmod nvme_rdma 00:08:23.273 rmmod nvme_fabrics 00:08:23.273 06:48:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:23.273 06:48:43 -- nvmf/common.sh@123 -- # set -e 00:08:23.273 06:48:43 -- nvmf/common.sh@124 -- # return 0 00:08:23.273 06:48:43 -- nvmf/common.sh@477 -- # '[' -n 1209121 ']' 00:08:23.273 06:48:43 -- nvmf/common.sh@478 -- # killprocess 1209121 00:08:23.273 06:48:43 -- common/autotest_common.sh@936 -- # '[' -z 1209121 ']' 00:08:23.273 06:48:43 -- common/autotest_common.sh@940 -- # kill -0 1209121 00:08:23.273 06:48:43 -- common/autotest_common.sh@941 -- # uname 00:08:23.273 06:48:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.273 06:48:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1209121 00:08:23.273 06:48:43 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:23.273 06:48:43 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:23.273 06:48:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1209121' killing process with pid 1209121 00:08:23.273 06:48:43 -- common/autotest_common.sh@955 -- # kill 1209121 00:08:23.273 06:48:43 -- common/autotest_common.sh@960 -- # wait 1209121 00:08:23.273 nvmf threads initialize successfully 00:08:23.273 bdev subsystem init successfully 00:08:23.273 created a nvmf target service 00:08:23.273 create targets's poll groups done 00:08:23.273 all subsystems of target started 00:08:23.273 nvmf target is running 00:08:23.273 all subsystems of target stopped 00:08:23.273 destroy targets's poll groups done 00:08:23.273 destroyed the nvmf target service 00:08:23.273 bdev subsystem finish successfully 00:08:23.273 nvmf threads destroy successfully 00:08:23.273 06:48:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:23.273 06:48:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:23.273 06:48:44 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:23.273 06:48:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.273 06:48:44 -- common/autotest_common.sh@10 -- # set +x 00:08:23.273
00:08:23.273 real 0m19.524s
00:08:23.273 user 0m52.075s
00:08:23.273 sys 0m5.598s
00:08:23.273 06:48:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.273 06:48:44 -- common/autotest_common.sh@10 -- # set +x
00:08:23.273 ************************************
00:08:23.273 END TEST nvmf_example
************************************
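The teardown above mirrors the setup: sync, unload nvme-rdma and nvme-fabrics (the rmmod lines), then reap the target through killprocess, which probes the pid with `kill -0` and inspects the process name before sending anything. A condensed sketch of that guard (simplified; the traced version also branches on sudo-wrapped processes and non-Linux hosts):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # pid already gone: nothing to clean up
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1          # sketch: a sudo wrapper needs different handling
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                  # reap the child and absorb its exit status
    }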
00:08:23.273 06:48:44 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:23.273 06:48:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:23.273 06:48:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.273 06:48:44 -- common/autotest_common.sh@10 -- # set +x
00:08:23.273 ************************************
00:08:23.273 START TEST nvmf_filesystem
00:08:23.273 ************************************
00:08:23.273 06:48:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:23.273 * Looking for test storage... 00:08:23.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.273 06:48:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:23.273 06:48:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:23.273 06:48:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:23.273 06:48:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:23.273 06:48:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:23.273 06:48:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:23.273 06:48:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:23.273 06:48:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:23.273 06:48:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:23.273 06:48:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.273 06:48:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:23.273 06:48:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:23.273 06:48:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:23.273 06:48:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:23.273 06:48:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:23.273 06:48:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:23.273 06:48:44 -- scripts/common.sh@344 -- # : 1 00:08:23.273 06:48:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:23.273 06:48:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:08:23.273 06:48:44 -- scripts/common.sh@364 -- # decimal 1 00:08:23.273 06:48:44 -- scripts/common.sh@352 -- # local d=1 00:08:23.273 06:48:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.273 06:48:44 -- scripts/common.sh@354 -- # echo 1 00:08:23.273 06:48:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:23.273 06:48:44 -- scripts/common.sh@365 -- # decimal 2 00:08:23.273 06:48:44 -- scripts/common.sh@352 -- # local d=2 00:08:23.273 06:48:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.273 06:48:44 -- scripts/common.sh@354 -- # echo 2 00:08:23.273 06:48:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:23.273 06:48:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:23.273 06:48:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:23.273 06:48:44 -- scripts/common.sh@367 -- # return 0 00:08:23.273 06:48:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.274 06:48:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:23.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.274 --rc genhtml_branch_coverage=1 00:08:23.274 --rc genhtml_function_coverage=1 00:08:23.274 --rc genhtml_legend=1 00:08:23.274 --rc geninfo_all_blocks=1 00:08:23.274 --rc geninfo_unexecuted_blocks=1 00:08:23.274 00:08:23.274 ' 00:08:23.274 06:48:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:23.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.274 --rc genhtml_branch_coverage=1 00:08:23.274 --rc genhtml_function_coverage=1 00:08:23.274 --rc genhtml_legend=1 00:08:23.274 --rc geninfo_all_blocks=1 00:08:23.274 --rc geninfo_unexecuted_blocks=1 00:08:23.274 00:08:23.274 ' 00:08:23.274 06:48:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:23.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.274 --rc genhtml_branch_coverage=1 00:08:23.274 --rc genhtml_function_coverage=1 00:08:23.274 --rc genhtml_legend=1 00:08:23.274 --rc geninfo_all_blocks=1 00:08:23.274 --rc geninfo_unexecuted_blocks=1 00:08:23.274 00:08:23.274 ' 00:08:23.274 06:48:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:23.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.274 --rc genhtml_branch_coverage=1 00:08:23.274 --rc genhtml_function_coverage=1 00:08:23.274 --rc genhtml_legend=1 00:08:23.274 --rc geninfo_all_blocks=1 00:08:23.274 --rc geninfo_unexecuted_blocks=1 00:08:23.274 00:08:23.274 ' 00:08:23.274 06:48:44 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:23.274 06:48:44 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:23.274 06:48:44 -- common/autotest_common.sh@34 -- # set -e 00:08:23.274 06:48:44 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:23.274 06:48:44 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:23.274 06:48:44 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:23.274 06:48:44 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:23.274 06:48:44 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:23.274 06:48:44 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:23.274 06:48:44 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:23.274 06:48:44 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
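The CONFIG_* lines running through here are autotest_common.sh replaying test/common/build_config.sh: every knob chosen at configure time (this tree: ASAN off, UBSAN on, external DPDK v22.11.4) becomes a plain shell variable that tests can branch on. A sketch of that pattern, assuming the same checkout path as the trace (the CONFIG_UBSAN value and the UBSAN_OPTIONS string both appear later in this log):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    source "$rootdir/test/common/build_config.sh"

    # e.g. only tighten sanitizer behaviour when the tree was built with UBSAN
    if [[ $CONFIG_UBSAN == y ]]; then
        export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
    fi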
00:08:23.274 06:48:44 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:23.274 06:48:44 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:23.274 06:48:44 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:23.274 06:48:44 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:23.274 06:48:44 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:23.274 06:48:44 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:23.274 06:48:44 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:23.274 06:48:44 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:23.274 06:48:44 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:23.274 06:48:44 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:23.274 06:48:44 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:23.274 06:48:44 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:23.274 06:48:44 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:23.274 06:48:44 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:23.274 06:48:44 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:23.274 06:48:44 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:23.274 06:48:44 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:23.274 06:48:44 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:23.274 06:48:44 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:23.274 06:48:44 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:23.274 06:48:44 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:23.274 06:48:44 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:23.274 06:48:44 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:23.274 06:48:44 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:23.274 06:48:44 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:23.274 06:48:44 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:23.274 06:48:44 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:23.274 06:48:44 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:23.274 06:48:44 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:23.274 06:48:44 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:23.274 06:48:44 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:23.274 06:48:44 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:23.274 06:48:44 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:23.274 06:48:44 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:23.274 06:48:44 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:23.274 06:48:44 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:23.274 06:48:44 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:23.274 06:48:44 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:23.274 06:48:44 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:23.274 06:48:44 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:23.274 06:48:44 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:23.274 06:48:44 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:23.274 06:48:44 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:23.274 06:48:44 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:23.274 06:48:44 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:23.274 
06:48:44 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:23.274 06:48:44 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:23.274 06:48:44 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:23.274 06:48:44 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:23.274 06:48:44 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:23.274 06:48:44 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:23.274 06:48:44 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:23.274 06:48:44 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:23.274 06:48:44 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:23.274 06:48:44 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:23.274 06:48:44 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:23.274 06:48:44 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:23.274 06:48:44 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:23.274 06:48:44 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:23.274 06:48:44 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:23.274 06:48:44 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:23.274 06:48:44 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:23.274 06:48:44 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:23.274 06:48:44 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:23.274 06:48:44 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:23.274 06:48:44 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:23.274 06:48:44 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:23.274 06:48:44 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:23.274 06:48:44 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:23.274 06:48:44 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:23.274 06:48:44 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:23.274 06:48:44 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:23.274 06:48:44 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:23.274 06:48:44 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:23.274 06:48:44 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:23.274 06:48:44 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:23.274 06:48:44 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:23.274 06:48:44 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:23.274 06:48:44 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:23.274 06:48:44 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:23.274 06:48:44 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:23.274 06:48:44 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:23.274 06:48:44 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:23.274 06:48:44 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:23.274 06:48:44 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:23.274 06:48:44 -- common/applications.sh@16 -- # 
NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:23.274 06:48:44 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:23.274 06:48:44 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:23.274 06:48:44 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:23.274 06:48:44 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:23.274 06:48:44 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:23.274 #define SPDK_CONFIG_H 00:08:23.274 #define SPDK_CONFIG_APPS 1 00:08:23.274 #define SPDK_CONFIG_ARCH native 00:08:23.274 #undef SPDK_CONFIG_ASAN 00:08:23.274 #undef SPDK_CONFIG_AVAHI 00:08:23.274 #undef SPDK_CONFIG_CET 00:08:23.274 #define SPDK_CONFIG_COVERAGE 1 00:08:23.274 #define SPDK_CONFIG_CROSS_PREFIX 00:08:23.274 #undef SPDK_CONFIG_CRYPTO 00:08:23.274 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:23.274 #undef SPDK_CONFIG_CUSTOMOCF 00:08:23.274 #undef SPDK_CONFIG_DAOS 00:08:23.274 #define SPDK_CONFIG_DAOS_DIR 00:08:23.274 #define SPDK_CONFIG_DEBUG 1 00:08:23.274 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:23.274 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:23.274 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:23.274 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:23.274 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:23.274 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:23.274 #define SPDK_CONFIG_EXAMPLES 1 00:08:23.274 #undef SPDK_CONFIG_FC 00:08:23.274 #define SPDK_CONFIG_FC_PATH 00:08:23.274 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:23.274 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:23.274 #undef SPDK_CONFIG_FUSE 00:08:23.274 #undef SPDK_CONFIG_FUZZER 00:08:23.274 #define SPDK_CONFIG_FUZZER_LIB 00:08:23.274 #undef SPDK_CONFIG_GOLANG 00:08:23.274 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:23.274 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:23.274 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:23.274 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:23.275 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:23.275 #define SPDK_CONFIG_IDXD 1 00:08:23.275 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:23.275 #undef SPDK_CONFIG_IPSEC_MB 00:08:23.275 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:23.275 #define SPDK_CONFIG_ISAL 1 00:08:23.275 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:23.275 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:23.275 #define SPDK_CONFIG_LIBDIR 00:08:23.275 #undef SPDK_CONFIG_LTO 00:08:23.275 #define SPDK_CONFIG_MAX_LCORES 00:08:23.275 #define SPDK_CONFIG_NVME_CUSE 1 00:08:23.275 #undef SPDK_CONFIG_OCF 00:08:23.275 #define SPDK_CONFIG_OCF_PATH 00:08:23.275 #define SPDK_CONFIG_OPENSSL_PATH 00:08:23.275 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:23.275 #undef SPDK_CONFIG_PGO_USE 00:08:23.275 #define SPDK_CONFIG_PREFIX /usr/local 00:08:23.275 #undef SPDK_CONFIG_RAID5F 00:08:23.275 #undef SPDK_CONFIG_RBD 00:08:23.275 #define SPDK_CONFIG_RDMA 1 00:08:23.275 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:23.275 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:23.275 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:23.275 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:23.275 #define SPDK_CONFIG_SHARED 1 00:08:23.275 #undef SPDK_CONFIG_SMA 00:08:23.275 #define SPDK_CONFIG_TESTS 1 00:08:23.275 #undef SPDK_CONFIG_TSAN 00:08:23.275 #define SPDK_CONFIG_UBLK 1 00:08:23.275 #define SPDK_CONFIG_UBSAN 1 00:08:23.275 #undef SPDK_CONFIG_UNIT_TESTS 
00:08:23.275 #undef SPDK_CONFIG_URING 00:08:23.275 #define SPDK_CONFIG_URING_PATH 00:08:23.275 #undef SPDK_CONFIG_URING_ZNS 00:08:23.275 #undef SPDK_CONFIG_USDT 00:08:23.275 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:23.275 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:23.275 #undef SPDK_CONFIG_VFIO_USER 00:08:23.275 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:23.275 #define SPDK_CONFIG_VHOST 1 00:08:23.275 #define SPDK_CONFIG_VIRTIO 1 00:08:23.275 #undef SPDK_CONFIG_VTUNE 00:08:23.275 #define SPDK_CONFIG_VTUNE_DIR 00:08:23.275 #define SPDK_CONFIG_WERROR 1 00:08:23.275 #define SPDK_CONFIG_WPDK_DIR 00:08:23.275 #undef SPDK_CONFIG_XNVME 00:08:23.275 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:23.275 06:48:44 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:23.275 06:48:44 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:23.275 06:48:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.275 06:48:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.275 06:48:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.275 06:48:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.275 06:48:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.275 06:48:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.275 06:48:44 -- paths/export.sh@5 -- # export PATH 00:08:23.275 06:48:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.275 06:48:44 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:23.275 06:48:44 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:23.275 06:48:44 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:23.275 06:48:44 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:23.275 06:48:44 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:23.275 06:48:44 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:23.275 06:48:44 -- pm/common@16 -- # TEST_TAG=N/A 00:08:23.275 06:48:44 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:23.275 06:48:44 -- common/autotest_common.sh@52 -- # : 1 00:08:23.275 06:48:44 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:23.275 06:48:44 -- common/autotest_common.sh@56 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:23.275 06:48:44 -- common/autotest_common.sh@58 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:23.275 06:48:44 -- common/autotest_common.sh@60 -- # : 1 00:08:23.275 06:48:44 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:23.275 06:48:44 -- common/autotest_common.sh@62 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:23.275 06:48:44 -- common/autotest_common.sh@64 -- # : 00:08:23.275 06:48:44 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:23.275 06:48:44 -- common/autotest_common.sh@66 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:23.275 06:48:44 -- common/autotest_common.sh@68 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:23.275 06:48:44 -- common/autotest_common.sh@70 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:23.275 06:48:44 -- common/autotest_common.sh@72 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:23.275 06:48:44 -- common/autotest_common.sh@74 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:23.275 06:48:44 -- common/autotest_common.sh@76 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:23.275 06:48:44 -- common/autotest_common.sh@78 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:23.275 06:48:44 -- common/autotest_common.sh@80 -- # : 1 00:08:23.275 06:48:44 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:23.275 06:48:44 -- common/autotest_common.sh@82 -- # : 0 
00:08:23.275 06:48:44 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:23.275 06:48:44 -- common/autotest_common.sh@84 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:23.275 06:48:44 -- common/autotest_common.sh@86 -- # : 1 00:08:23.275 06:48:44 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:23.275 06:48:44 -- common/autotest_common.sh@88 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:23.275 06:48:44 -- common/autotest_common.sh@90 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:23.275 06:48:44 -- common/autotest_common.sh@92 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:23.275 06:48:44 -- common/autotest_common.sh@94 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:23.275 06:48:44 -- common/autotest_common.sh@96 -- # : rdma 00:08:23.275 06:48:44 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:23.275 06:48:44 -- common/autotest_common.sh@98 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:23.275 06:48:44 -- common/autotest_common.sh@100 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:23.275 06:48:44 -- common/autotest_common.sh@102 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:23.275 06:48:44 -- common/autotest_common.sh@104 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:23.275 06:48:44 -- common/autotest_common.sh@106 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:23.275 06:48:44 -- common/autotest_common.sh@108 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:23.275 06:48:44 -- common/autotest_common.sh@110 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:23.275 06:48:44 -- common/autotest_common.sh@112 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:23.275 06:48:44 -- common/autotest_common.sh@114 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:23.275 06:48:44 -- common/autotest_common.sh@116 -- # : 1 00:08:23.275 06:48:44 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:23.275 06:48:44 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:23.275 06:48:44 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:23.275 06:48:44 -- common/autotest_common.sh@120 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:23.275 06:48:44 -- common/autotest_common.sh@122 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:23.275 06:48:44 -- common/autotest_common.sh@124 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:23.275 06:48:44 -- common/autotest_common.sh@126 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:23.275 06:48:44 -- common/autotest_common.sh@128 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 
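Every `: 0` / `export SPDK_TEST_...` pair in this stretch is the same bash idiom: assign a default only if the variable is still unset, then export whatever value won. That is how one autotest_common.sh serves every CI job: the pipeline pre-seeds the flags it cares about (here SPDK_TEST_NVMF=1 and SPDK_TEST_NVMF_TRANSPORT=rdma) and the rest fall back to 0. A two-line sketch of the idiom:

    : "${SPDK_TEST_NVMF:=0}"   # ':' just evaluates its arguments; ${VAR:=default} assigns only when unset
    export SPDK_TEST_NVMF      # xtrace prints these as the ': <value>' / 'export ...' pairs seen here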
00:08:23.275 06:48:44 -- common/autotest_common.sh@130 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:23.275 06:48:44 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:23.275 06:48:44 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:23.275 06:48:44 -- common/autotest_common.sh@134 -- # : true 00:08:23.275 06:48:44 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:23.275 06:48:44 -- common/autotest_common.sh@136 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:23.275 06:48:44 -- common/autotest_common.sh@138 -- # : 0 00:08:23.275 06:48:44 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:23.275 06:48:44 -- common/autotest_common.sh@140 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:23.276 06:48:44 -- common/autotest_common.sh@142 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:23.276 06:48:44 -- common/autotest_common.sh@144 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:23.276 06:48:44 -- common/autotest_common.sh@146 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:23.276 06:48:44 -- common/autotest_common.sh@148 -- # : mlx5 00:08:23.276 06:48:44 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:23.276 06:48:44 -- common/autotest_common.sh@150 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:23.276 06:48:44 -- common/autotest_common.sh@152 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:23.276 06:48:44 -- common/autotest_common.sh@154 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:23.276 06:48:44 -- common/autotest_common.sh@156 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:23.276 06:48:44 -- common/autotest_common.sh@158 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:23.276 06:48:44 -- common/autotest_common.sh@160 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:23.276 06:48:44 -- common/autotest_common.sh@163 -- # : 00:08:23.276 06:48:44 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:23.276 06:48:44 -- common/autotest_common.sh@165 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:23.276 06:48:44 -- common/autotest_common.sh@167 -- # : 0 00:08:23.276 06:48:44 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:23.276 06:48:44 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:23.276 06:48:44 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:23.276 06:48:44 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:23.276 06:48:44 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:23.276 06:48:44 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
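The three lib-dir exports just above feed the LD_LIBRARY_PATH that follows, and because autotest_common.sh is re-sourced for every nested run_test, the same directories get prepended again each time; that is why the path below repeats spdk/build/lib five times (harmless, but noisy). An idempotent variant, using a hypothetical prepend_once helper rather than the harness's unconditional prepend:

    prepend_once() {                       # prepend $2 to the path variable named $1, once
        local var=$1 dir=$2
        case ":${!var}:" in
            *":$dir:"*) ;;                 # already present: leave the path alone
            *) printf -v "$var" '%s' "$dir${!var:+:${!var}}"; export "$var" ;;
        esac
    }
    prepend_once LD_LIBRARY_PATH "$SPDK_LIB_DIR"
    prepend_once LD_LIBRARY_PATH "$DPDK_LIB_DIR"
    prepend_once LD_LIBRARY_PATH "$VFIO_LIB_DIR"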
00:08:23.276 06:48:44 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.276 06:48:44 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.276 06:48:44 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:23.276 06:48:44 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:23.276 06:48:44 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:23.276 06:48:44 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:23.276 06:48:44 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:23.276 06:48:44 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:23.276 06:48:44 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:23.276 
06:48:44 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:23.276 06:48:44 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:23.276 06:48:44 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:23.276 06:48:44 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:23.276 06:48:44 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:23.276 06:48:44 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:23.276 06:48:44 -- common/autotest_common.sh@196 -- # cat 00:08:23.276 06:48:44 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:23.276 06:48:44 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:23.276 06:48:44 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:23.276 06:48:44 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:23.276 06:48:44 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:23.276 06:48:44 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:23.276 06:48:44 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:23.276 06:48:44 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:23.276 06:48:44 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:23.276 06:48:44 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:23.276 06:48:44 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:23.276 06:48:44 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:23.276 06:48:44 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:23.276 06:48:44 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:23.276 06:48:44 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:23.276 06:48:44 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:23.276 06:48:44 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:23.276 06:48:44 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:23.276 06:48:44 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:23.276 06:48:44 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:23.276 06:48:44 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:23.276 06:48:44 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:23.276 06:48:44 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:23.276 06:48:44 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:23.276 06:48:44 -- 
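The sanitizer setup traced here boils down to: export the ASan/UBSan knobs, rebuild the LeakSanitizer suppression file from scratch, and whitelist the known libfuse3 leak. As standalone commands, with values copied from the log:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"        # suppress known fuse3 leaks
    export LSAN_OPTIONS=suppressions=$supp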
common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:23.276 06:48:44 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:23.276 06:48:44 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:23.276 06:48:44 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:23.276 06:48:44 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:23.276 06:48:44 -- common/autotest_common.sh@259 -- # valgrind= 00:08:23.276 06:48:44 -- common/autotest_common.sh@265 -- # uname -s 00:08:23.276 06:48:44 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:23.276 06:48:44 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:23.276 06:48:44 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:23.276 06:48:44 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:23.276 06:48:44 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:23.276 06:48:44 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:23.276 06:48:44 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:23.276 06:48:44 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:23.276 06:48:44 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:23.276 06:48:44 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:23.276 06:48:44 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:23.276 06:48:44 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:23.276 06:48:44 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:23.276 06:48:44 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:23.276 06:48:44 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:23.276 06:48:44 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:08:23.276 06:48:44 -- common/autotest_common.sh@319 -- # [[ -z 1211366 ]] 00:08:23.276 06:48:44 -- common/autotest_common.sh@319 -- # kill -0 1211366 00:08:23.276 06:48:44 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:23.276 06:48:44 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:23.276 06:48:44 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:23.276 06:48:44 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:23.276 06:48:44 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:23.276 06:48:44 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:23.276 06:48:44 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:23.276 06:48:44 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:23.276 06:48:44 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.xpoUzm 00:08:23.276 06:48:44 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:23.276 06:48:44 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:23.276 06:48:44 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:23.276 06:48:44 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xpoUzm/tests/target /tmp/spdk.xpoUzm 00:08:23.276 06:48:44 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:23.276 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.276 06:48:44 -- common/autotest_common.sh@328 -- # df -T 
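set_test_storage, entered here with a base request of 2147483648 bytes (requested_size ends up 2214592512 with overhead), tries the test directory itself, then a mktemp-based fallback, and keeps the first candidate whose mount has enough room. A condensed sketch of that selection loop (simplified; the real helper parses the 'df -T' snapshot below into per-mount arrays):

    set_test_storage_sketch() {
        local requested_size=$1 testdir=$2 fallback target avail
        fallback=$(mktemp -udt spdk.XXXXXX)
        local -a candidates=("$testdir" "$fallback/tests/${testdir##*/}" "$fallback")
        for target in "${candidates[@]}"; do
            mkdir -p "$target"
            # df -P: column 4 of line 2 is the available space in 1K blocks
            avail=$(( $(df -P "$target" | awk 'NR==2 {print $4}') * 1024 ))
            if (( avail >= requested_size )); then
                export SPDK_TEST_STORAGE=$target
                printf '* Found test storage at %s\n' "$target"
                return 0
            fi
        done
        return 1
    }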
00:08:23.277 06:48:44 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:23.277 06:48:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:23.277 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=422735872 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:23.277 06:48:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=4861693952 00:08:23.277 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=55185387520 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730590720 00:08:23.277 06:48:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=6545203200 00:08:23.277 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864035840 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865293312 00:08:23.277 06:48:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:23.277 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=12336680960 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346118144 00:08:23.277 06:48:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=9437184 00:08:23.277 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865080320 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865297408 00:08:23.277 06:48:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=217088 00:08:23.277 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:23.277 06:48:44 -- common/autotest_common.sh@363 -- # 
sizes["$mount"]=6173057024 00:08:23.277 06:48:44 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:23.277 06:48:44 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:23.277 06:48:44 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:23.277 * Looking for test storage... 00:08:23.277 06:48:44 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:23.277 06:48:44 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:23.277 06:48:44 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.277 06:48:44 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:23.277 06:48:44 -- common/autotest_common.sh@373 -- # mount=/ 00:08:23.277 06:48:44 -- common/autotest_common.sh@375 -- # target_space=55185387520 00:08:23.277 06:48:44 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:23.277 06:48:44 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:23.277 06:48:44 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:23.277 06:48:44 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:23.277 06:48:44 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:23.277 06:48:44 -- common/autotest_common.sh@382 -- # new_size=8759795712 00:08:23.277 06:48:44 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:23.277 06:48:44 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.277 06:48:44 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.277 06:48:44 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.277 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.277 06:48:44 -- common/autotest_common.sh@390 -- # return 0 00:08:23.277 06:48:44 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:23.277 06:48:44 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:23.277 06:48:44 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:23.277 06:48:44 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:23.277 06:48:44 -- common/autotest_common.sh@1682 -- # true 00:08:23.277 06:48:44 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:23.277 06:48:44 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:23.277 06:48:44 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:23.277 06:48:44 -- common/autotest_common.sh@27 -- # exec 00:08:23.277 06:48:44 -- common/autotest_common.sh@29 -- # exec 00:08:23.277 06:48:44 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:23.277 06:48:44 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:23.277 06:48:44 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:23.277 06:48:44 -- common/autotest_common.sh@18 -- # set -x 00:08:23.277 06:48:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:23.277 06:48:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:23.277 06:48:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:23.277 06:48:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:23.277 06:48:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:23.277 06:48:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:23.277 06:48:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:23.277 06:48:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:23.277 06:48:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:23.277 06:48:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.277 06:48:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:23.277 06:48:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:23.277 06:48:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:23.277 06:48:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:23.277 06:48:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:23.277 06:48:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:23.277 06:48:44 -- scripts/common.sh@344 -- # : 1 00:08:23.277 06:48:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:23.277 06:48:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.277 06:48:44 -- scripts/common.sh@364 -- # decimal 1 00:08:23.277 06:48:44 -- scripts/common.sh@352 -- # local d=1 00:08:23.277 06:48:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.277 06:48:44 -- scripts/common.sh@354 -- # echo 1 00:08:23.277 06:48:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:23.277 06:48:44 -- scripts/common.sh@365 -- # decimal 2 00:08:23.277 06:48:44 -- scripts/common.sh@352 -- # local d=2 00:08:23.277 06:48:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.277 06:48:44 -- scripts/common.sh@354 -- # echo 2 00:08:23.277 06:48:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:23.277 06:48:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:23.277 06:48:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:23.277 06:48:44 -- scripts/common.sh@367 -- # return 0 00:08:23.277 06:48:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.277 06:48:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:23.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.277 --rc genhtml_branch_coverage=1 00:08:23.277 --rc genhtml_function_coverage=1 00:08:23.277 --rc genhtml_legend=1 00:08:23.277 --rc geninfo_all_blocks=1 00:08:23.277 --rc geninfo_unexecuted_blocks=1 00:08:23.277 00:08:23.277 ' 00:08:23.277 06:48:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:23.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.277 --rc genhtml_branch_coverage=1 00:08:23.277 --rc genhtml_function_coverage=1 00:08:23.277 --rc genhtml_legend=1 00:08:23.277 --rc geninfo_all_blocks=1 00:08:23.277 --rc geninfo_unexecuted_blocks=1 00:08:23.277 00:08:23.277 ' 00:08:23.277 06:48:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:23.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.277 --rc genhtml_branch_coverage=1 00:08:23.277 --rc genhtml_function_coverage=1 00:08:23.278 --rc genhtml_legend=1 00:08:23.278 --rc geninfo_all_blocks=1 00:08:23.278 --rc 
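The lcov probe above ends in the version comparison from scripts/common.sh: 'lt 1.15 2' splits both strings on '.', '-' or ':' and compares components numerically from the left, so lcov 1.15 counts as older than 2 and the pre-2.x branch-coverage options are selected. A self-contained sketch of that comparison (assumes purely numeric components; the real decimal() helper also validates them):

    version_lt() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'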
geninfo_unexecuted_blocks=1 00:08:23.278 00:08:23.278 ' 00:08:23.278 06:48:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:23.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.278 --rc genhtml_branch_coverage=1 00:08:23.278 --rc genhtml_function_coverage=1 00:08:23.278 --rc genhtml_legend=1 00:08:23.278 --rc geninfo_all_blocks=1 00:08:23.278 --rc geninfo_unexecuted_blocks=1 00:08:23.278 00:08:23.278 ' 00:08:23.278 06:48:44 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.278 06:48:44 -- nvmf/common.sh@7 -- # uname -s 00:08:23.278 06:48:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.278 06:48:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.278 06:48:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.278 06:48:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.278 06:48:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.278 06:48:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.278 06:48:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.278 06:48:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.278 06:48:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.278 06:48:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.278 06:48:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:23.278 06:48:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:23.278 06:48:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.278 06:48:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.278 06:48:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.278 06:48:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:23.278 06:48:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.278 06:48:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.278 06:48:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.278 06:48:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.278 06:48:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.278 06:48:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.278 06:48:44 -- paths/export.sh@5 -- # export PATH 00:08:23.278 06:48:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.278 06:48:44 -- nvmf/common.sh@46 -- # : 0 00:08:23.278 06:48:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:23.278 06:48:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:23.278 06:48:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:23.278 06:48:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.278 06:48:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.278 06:48:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:23.278 06:48:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:23.278 06:48:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:23.278 06:48:44 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:23.278 06:48:44 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:23.278 06:48:44 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:23.278 06:48:44 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:23.278 06:48:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.278 06:48:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:23.278 06:48:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:23.278 06:48:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:23.278 06:48:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.278 06:48:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.278 06:48:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.278 06:48:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:23.278 06:48:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:23.278 06:48:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:23.278 06:48:44 -- common/autotest_common.sh@10 -- # set +x 00:08:29.945 06:48:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:29.945 06:48:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:29.945 06:48:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:29.945 06:48:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:29.945 06:48:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:29.945 06:48:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:29.945 06:48:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:29.945 06:48:51 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:29.945 06:48:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:29.945 06:48:51 -- nvmf/common.sh@295 -- # e810=() 00:08:29.945 06:48:51 -- nvmf/common.sh@295 -- # local -ga e810 00:08:29.945 06:48:51 -- nvmf/common.sh@296 -- # x722=() 00:08:29.945 06:48:51 -- nvmf/common.sh@296 -- # local -ga x722 00:08:29.945 06:48:51 -- nvmf/common.sh@297 -- # mlx=() 00:08:29.945 06:48:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:29.945 06:48:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.945 06:48:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:29.945 06:48:51 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:29.945 06:48:51 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:29.945 06:48:51 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:29.945 06:48:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:29.945 06:48:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:29.945 06:48:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:29.945 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:29.945 06:48:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:29.945 06:48:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:29.945 06:48:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:29.945 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:29.945 06:48:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:29.945 06:48:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:29.945 06:48:51 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:29.945 
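The array setup above builds a table of supported NIC device IDs and matches each PCI function's vendor/device pair against it; here both ports of a Mellanox 0x1015 part are found at 0000:d9:00.0 and 0000:d9:00.1. A rough sysfs-level equivalent of that scan (the harness actually consults a prebuilt pci_bus_cache rather than walking sysfs directly):

    mlx_ids='0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013'
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == 0x15b3 ]] || continue                 # Mellanox only
        [[ " $mlx_ids " == *" $device "* ]] || continue     # supported mlx5 IDs
        echo "Found ${dev##*/} ($vendor - $device)"
    done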
06:48:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.945 06:48:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:29.945 06:48:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.945 06:48:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:29.945 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:29.945 06:48:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.945 06:48:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:29.945 06:48:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.945 06:48:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:29.945 06:48:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.945 06:48:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:29.945 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:29.945 06:48:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.945 06:48:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:29.945 06:48:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:29.945 06:48:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:29.945 06:48:51 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:29.945 06:48:51 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:29.945 06:48:51 -- nvmf/common.sh@57 -- # uname 00:08:29.945 06:48:51 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:29.945 06:48:51 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:29.945 06:48:51 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:29.945 06:48:51 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:29.945 06:48:51 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:29.945 06:48:51 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:29.945 06:48:51 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:29.945 06:48:51 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:29.945 06:48:51 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:29.945 06:48:51 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:29.945 06:48:51 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:29.945 06:48:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:29.945 06:48:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:29.945 06:48:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:29.946 06:48:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:29.946 06:48:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:29.946 06:48:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@104 -- # continue 2 00:08:29.946 06:48:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@103 -- # 
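The trace above resolves each discovered PCI function to its net interface through sysfs, and rdma_device_init then loads the kernel RDMA stack with a series of modprobes. Spelled out directly (PCI addresses are the ones from this run):

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] || continue
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done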
echo mlx_0_1 00:08:29.946 06:48:51 -- nvmf/common.sh@104 -- # continue 2 00:08:29.946 06:48:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:29.946 06:48:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:29.946 06:48:51 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:29.946 06:48:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:29.946 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:29.946 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:29.946 altname enp217s0f0np0 00:08:29.946 altname ens818f0np0 00:08:29.946 inet 192.168.100.8/24 scope global mlx_0_0 00:08:29.946 valid_lft forever preferred_lft forever 00:08:29.946 06:48:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:29.946 06:48:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:29.946 06:48:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:29.946 06:48:51 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:29.946 06:48:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:29.946 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:29.946 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:29.946 altname enp217s0f1np1 00:08:29.946 altname ens818f1np1 00:08:29.946 inet 192.168.100.9/24 scope global mlx_0_1 00:08:29.946 valid_lft forever preferred_lft forever 00:08:29.946 06:48:51 -- nvmf/common.sh@410 -- # return 0 00:08:29.946 06:48:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:29.946 06:48:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:29.946 06:48:51 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:29.946 06:48:51 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:29.946 06:48:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:29.946 06:48:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:29.946 06:48:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:29.946 06:48:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:29.946 06:48:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:29.946 06:48:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@104 -- # continue 2 00:08:29.946 06:48:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.946 06:48:51 -- nvmf/common.sh@102 -- 
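get_ip_address, used twice above, is a three-stage pipeline: 'ip -o' prints one record per line, field 4 is ADDR/PREFIX, and cut strips the prefix length. As a standalone helper:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this machine
    get_ip_address mlx_0_1    # -> 192.168.100.9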
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:29.946 06:48:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:29.946 06:48:51 -- nvmf/common.sh@104 -- # continue 2 00:08:29.946 06:48:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:29.946 06:48:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:29.946 06:48:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:30.205 06:48:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:30.205 06:48:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:30.205 06:48:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:30.205 06:48:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:30.206 06:48:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:30.206 06:48:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:30.206 06:48:51 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:30.206 192.168.100.9' 00:08:30.206 06:48:51 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:30.206 192.168.100.9' 00:08:30.206 06:48:51 -- nvmf/common.sh@445 -- # head -n 1 00:08:30.206 06:48:51 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:30.206 06:48:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:30.206 192.168.100.9' 00:08:30.206 06:48:51 -- nvmf/common.sh@446 -- # tail -n +2 00:08:30.206 06:48:51 -- nvmf/common.sh@446 -- # head -n 1 00:08:30.206 06:48:51 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:30.206 06:48:51 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:30.206 06:48:51 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:30.206 06:48:51 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:30.206 06:48:51 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:30.206 06:48:51 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:30.206 06:48:51 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:30.206 06:48:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:30.206 06:48:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.206 06:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:30.206 ************************************ 00:08:30.206 START TEST nvmf_filesystem_no_in_capsule 00:08:30.206 ************************************ 00:08:30.206 06:48:51 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:30.206 06:48:51 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:30.206 06:48:51 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:30.206 06:48:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:30.206 06:48:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.206 06:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:30.206 06:48:51 -- nvmf/common.sh@469 -- # nvmfpid=1214808 00:08:30.206 06:48:51 -- nvmf/common.sh@470 -- # waitforlisten 1214808 00:08:30.206 06:48:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.206 06:48:51 -- common/autotest_common.sh@829 -- # '[' -z 1214808 ']' 00:08:30.206 06:48:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.206 06:48:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.206 06:48:51 -- common/autotest_common.sh@836 
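RDMA_IP_LIST is a two-line string, and the head/tail juggling above simply peels off the first and second lines as the first and second target IPs:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9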
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.206 06:48:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.206 06:48:51 -- common/autotest_common.sh@10 -- # set +x 00:08:30.206 [2024-12-15 06:48:51.716252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:30.206 [2024-12-15 06:48:51.716304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.206 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.206 [2024-12-15 06:48:51.787817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.206 [2024-12-15 06:48:51.828146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.206 [2024-12-15 06:48:51.828257] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.206 [2024-12-15 06:48:51.828268] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.206 [2024-12-15 06:48:51.828277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.206 [2024-12-15 06:48:51.828370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.206 [2024-12-15 06:48:51.828468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.206 [2024-12-15 06:48:51.828531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.206 [2024-12-15 06:48:51.828533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.142 06:48:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.142 06:48:52 -- common/autotest_common.sh@862 -- # return 0 00:08:31.142 06:48:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:31.142 06:48:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.142 06:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.142 06:48:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.142 06:48:52 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:31.142 06:48:52 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:31.142 06:48:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.142 06:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.142 [2024-12-15 06:48:52.583462] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:31.142 [2024-12-15 06:48:52.604557] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18700f0/0x18745c0) succeed. 00:08:31.142 [2024-12-15 06:48:52.613680] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1871690/0x18b5c60) succeed. 
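nvmfappstart launches nvmf_tgt (nvmfpid=1214808 in this run) and waitforlisten blocks until the target's RPC socket is usable. Conceptually that wait is a poll loop along these lines (heavily simplified; the real helper in autotest_common.sh also issues an RPC call to confirm readiness):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died
            [[ -S $rpc_addr ]] && return 0            # RPC socket is up
            sleep 0.1
        done
        return 1
    }

    waitforlisten_sketch 1214808 /var/tmp/spdk.sock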
00:08:31.142 06:48:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.142 06:48:52 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:31.142 06:48:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.142 06:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.401 Malloc1 00:08:31.401 06:48:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.401 06:48:52 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:31.401 06:48:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.401 06:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.401 06:48:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.401 06:48:52 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:31.401 06:48:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.401 06:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.401 06:48:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.401 06:48:52 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:31.401 06:48:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.401 06:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.401 [2024-12-15 06:48:52.858809] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:31.401 06:48:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.401 06:48:52 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:31.401 06:48:52 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:31.401 06:48:52 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:31.401 06:48:52 -- common/autotest_common.sh@1369 -- # local bs 00:08:31.401 06:48:52 -- common/autotest_common.sh@1370 -- # local nb 00:08:31.401 06:48:52 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:31.401 06:48:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.401 06:48:52 -- common/autotest_common.sh@10 -- # set +x 00:08:31.401 06:48:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.401 06:48:52 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:31.401 { 00:08:31.401 "name": "Malloc1", 00:08:31.401 "aliases": [ 00:08:31.401 "af6ac18d-ebbb-4bbf-a981-215eb7eed233" 00:08:31.401 ], 00:08:31.401 "product_name": "Malloc disk", 00:08:31.401 "block_size": 512, 00:08:31.401 "num_blocks": 1048576, 00:08:31.401 "uuid": "af6ac18d-ebbb-4bbf-a981-215eb7eed233", 00:08:31.401 "assigned_rate_limits": { 00:08:31.401 "rw_ios_per_sec": 0, 00:08:31.401 "rw_mbytes_per_sec": 0, 00:08:31.401 "r_mbytes_per_sec": 0, 00:08:31.401 "w_mbytes_per_sec": 0 00:08:31.401 }, 00:08:31.401 "claimed": true, 00:08:31.401 "claim_type": "exclusive_write", 00:08:31.401 "zoned": false, 00:08:31.401 "supported_io_types": { 00:08:31.402 "read": true, 00:08:31.402 "write": true, 00:08:31.402 "unmap": true, 00:08:31.402 "write_zeroes": true, 00:08:31.402 "flush": true, 00:08:31.402 "reset": true, 00:08:31.402 "compare": false, 00:08:31.402 "compare_and_write": false, 00:08:31.402 "abort": true, 00:08:31.402 "nvme_admin": false, 00:08:31.402 "nvme_io": false 00:08:31.402 }, 00:08:31.402 "memory_domains": [ 00:08:31.402 { 00:08:31.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.402 "dma_device_type": 2 00:08:31.402 } 00:08:31.402 ], 00:08:31.402 
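The provisioning sequence traced here is effectively equivalent to these direct rpc.py invocations (rpc_cmd is a wrapper around the same script): create the RDMA transport with no in-capsule data (-c 0), back a namespace with a 512 MiB malloc bdev, and expose it on the first target IP:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    "$rpc" bdev_malloc_create 512 512 -b Malloc1      # 512 MiB, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420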
"driver_specific": {} 00:08:31.402 } 00:08:31.402 ]' 00:08:31.402 06:48:52 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:31.402 06:48:52 -- common/autotest_common.sh@1372 -- # bs=512 00:08:31.402 06:48:52 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:31.402 06:48:52 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:31.402 06:48:52 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:31.402 06:48:52 -- common/autotest_common.sh@1377 -- # echo 512 00:08:31.402 06:48:52 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:31.402 06:48:52 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:32.778 06:48:53 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.778 06:48:53 -- common/autotest_common.sh@1187 -- # local i=0 00:08:32.778 06:48:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.778 06:48:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:32.778 06:48:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:34.681 06:48:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:34.681 06:48:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:34.681 06:48:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.681 06:48:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:34.681 06:48:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.681 06:48:56 -- common/autotest_common.sh@1197 -- # return 0 00:08:34.681 06:48:56 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:34.681 06:48:56 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:34.681 06:48:56 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:34.681 06:48:56 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:34.681 06:48:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:34.681 06:48:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:34.681 06:48:56 -- setup/common.sh@80 -- # echo 536870912 00:08:34.681 06:48:56 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:34.681 06:48:56 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:34.681 06:48:56 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:34.681 06:48:56 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:34.681 06:48:56 -- target/filesystem.sh@69 -- # partprobe 00:08:34.681 06:48:56 -- target/filesystem.sh@70 -- # sleep 1 00:08:35.616 06:48:57 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:35.616 06:48:57 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:35.616 06:48:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:35.616 06:48:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.616 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:35.616 ************************************ 00:08:35.616 START TEST filesystem_ext4 00:08:35.616 ************************************ 00:08:35.616 06:48:57 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:35.616 06:48:57 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:35.616 06:48:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:35.616 
06:48:57 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:35.616 06:48:57 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:35.616 06:48:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:35.616 06:48:57 -- common/autotest_common.sh@914 -- # local i=0 00:08:35.616 06:48:57 -- common/autotest_common.sh@915 -- # local force 00:08:35.616 06:48:57 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:35.616 06:48:57 -- common/autotest_common.sh@918 -- # force=-F 00:08:35.616 06:48:57 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:35.616 mke2fs 1.47.0 (5-Feb-2023) 00:08:35.875 Discarding device blocks: 0/522240 done 00:08:35.875 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:35.875 Filesystem UUID: be55dd29-51d2-420e-93c9-5c684ff86a80 00:08:35.875 Superblock backups stored on blocks: 00:08:35.875 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:35.875 00:08:35.875 Allocating group tables: 0/64 done 00:08:35.875 Writing inode tables: 0/64 done 00:08:35.875 Creating journal (8192 blocks): done 00:08:35.875 Writing superblocks and filesystem accounting information: 0/64 done 00:08:35.875 00:08:35.875 06:48:57 -- common/autotest_common.sh@931 -- # return 0 00:08:35.875 06:48:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.875 06:48:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.875 06:48:57 -- target/filesystem.sh@25 -- # sync 00:08:35.875 06:48:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.875 06:48:57 -- target/filesystem.sh@27 -- # sync 00:08:35.875 06:48:57 -- target/filesystem.sh@29 -- # i=0 00:08:35.875 06:48:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.875 06:48:57 -- target/filesystem.sh@37 -- # kill -0 1214808 00:08:35.875 06:48:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.875 06:48:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.875 06:48:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.875 06:48:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.875 00:08:35.875 real 0m0.194s 00:08:35.875 user 0m0.036s 00:08:35.875 sys 0m0.063s 00:08:35.875 06:48:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.875 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:35.875 ************************************ 00:08:35.875 END TEST filesystem_ext4 00:08:35.875 ************************************ 00:08:35.875 06:48:57 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:35.875 06:48:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:35.875 06:48:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.875 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:35.875 ************************************ 00:08:35.875 START TEST filesystem_btrfs 00:08:35.875 ************************************ 00:08:35.875 06:48:57 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:35.875 06:48:57 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:35.875 06:48:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:35.875 06:48:57 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:35.875 06:48:57 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:35.875 06:48:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:35.875 06:48:57 -- common/autotest_common.sh@914 -- # local 
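make_filesystem, entered at the top of each per-filesystem test, maps the fstype to the right force flag (ext4 spells it -F, btrfs and xfs use -f) before running mkfs; the real helper also keeps an i counter so it can retry on failure. Condensed:

    make_filesystem_sketch() {
        local fstype=$1 dev_name=$2 force
        [[ $fstype == ext4 ]] && force=-F || force=-f
        mkfs."$fstype" "$force" "$dev_name"
    }

Each filesystem is then exercised with the same mount/touch/sync/rm/umount cycle seen above, followed by the 'kill -0' liveness check on the target process.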
i=0 00:08:35.875 06:48:57 -- common/autotest_common.sh@915 -- # local force 00:08:35.875 06:48:57 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:35.875 06:48:57 -- common/autotest_common.sh@920 -- # force=-f 00:08:35.875 06:48:57 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:36.134 btrfs-progs v6.8.1 00:08:36.134 See https://btrfs.readthedocs.io for more information. 00:08:36.134 00:08:36.134 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:36.134 NOTE: several default settings have changed in version 5.15, please make sure 00:08:36.134 this does not affect your deployments: 00:08:36.134 - DUP for metadata (-m dup) 00:08:36.134 - enabled no-holes (-O no-holes) 00:08:36.134 - enabled free-space-tree (-R free-space-tree) 00:08:36.134 00:08:36.134 Label: (null) 00:08:36.134 UUID: de8db7f8-1a63-45aa-9e55-8c8160e3878b 00:08:36.134 Node size: 16384 00:08:36.134 Sector size: 4096 (CPU page size: 4096) 00:08:36.134 Filesystem size: 510.00MiB 00:08:36.134 Block group profiles: 00:08:36.134 Data: single 8.00MiB 00:08:36.134 Metadata: DUP 32.00MiB 00:08:36.134 System: DUP 8.00MiB 00:08:36.134 SSD detected: yes 00:08:36.134 Zoned device: no 00:08:36.134 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:36.134 Checksum: crc32c 00:08:36.134 Number of devices: 1 00:08:36.134 Devices: 00:08:36.134 ID SIZE PATH 00:08:36.134 1 510.00MiB /dev/nvme0n1p1 00:08:36.134 00:08:36.135 06:48:57 -- common/autotest_common.sh@931 -- # return 0 00:08:36.135 06:48:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.135 06:48:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.135 06:48:57 -- target/filesystem.sh@25 -- # sync 00:08:36.135 06:48:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.135 06:48:57 -- target/filesystem.sh@27 -- # sync 00:08:36.135 06:48:57 -- target/filesystem.sh@29 -- # i=0 00:08:36.135 06:48:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.135 06:48:57 -- target/filesystem.sh@37 -- # kill -0 1214808 00:08:36.135 06:48:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.135 06:48:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.135 06:48:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.135 06:48:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.135 00:08:36.135 real 0m0.250s 00:08:36.135 user 0m0.038s 00:08:36.135 sys 0m0.115s 00:08:36.135 06:48:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.135 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:36.135 ************************************ 00:08:36.135 END TEST filesystem_btrfs 00:08:36.135 ************************************ 00:08:36.135 06:48:57 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:36.135 06:48:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:36.135 06:48:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.135 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:36.135 ************************************ 00:08:36.135 START TEST filesystem_xfs 00:08:36.135 ************************************ 00:08:36.135 06:48:57 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:36.135 06:48:57 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:36.135 06:48:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.135 06:48:57 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:36.135 06:48:57 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:08:36.135 06:48:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:36.135 06:48:57 -- common/autotest_common.sh@914 -- # local i=0 00:08:36.135 06:48:57 -- common/autotest_common.sh@915 -- # local force 00:08:36.135 06:48:57 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:36.135 06:48:57 -- common/autotest_common.sh@920 -- # force=-f 00:08:36.135 06:48:57 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:36.393 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:36.393 = sectsz=512 attr=2, projid32bit=1 00:08:36.393 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:36.393 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:36.393 data = bsize=4096 blocks=130560, imaxpct=25 00:08:36.393 = sunit=0 swidth=0 blks 00:08:36.393 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:36.393 log =internal log bsize=4096 blocks=16384, version=2 00:08:36.393 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:36.393 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:36.393 Discarding blocks...Done. 00:08:36.393 06:48:57 -- common/autotest_common.sh@931 -- # return 0 00:08:36.393 06:48:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.393 06:48:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.393 06:48:57 -- target/filesystem.sh@25 -- # sync 00:08:36.393 06:48:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.393 06:48:57 -- target/filesystem.sh@27 -- # sync 00:08:36.394 06:48:57 -- target/filesystem.sh@29 -- # i=0 00:08:36.394 06:48:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.394 06:48:57 -- target/filesystem.sh@37 -- # kill -0 1214808 00:08:36.394 06:48:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.394 06:48:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.394 06:48:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.394 06:48:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.394 00:08:36.394 real 0m0.204s 00:08:36.394 user 0m0.030s 00:08:36.394 sys 0m0.080s 00:08:36.394 06:48:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.394 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:08:36.394 ************************************ 00:08:36.394 END TEST filesystem_xfs 00:08:36.394 ************************************ 00:08:36.394 06:48:57 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:36.394 06:48:58 -- target/filesystem.sh@93 -- # sync 00:08:36.394 06:48:58 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.590 06:48:58 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.590 06:48:58 -- common/autotest_common.sh@1208 -- # local i=0 00:08:37.590 06:48:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:37.590 06:48:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.590 06:48:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:37.590 06:48:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.590 06:48:59 -- common/autotest_common.sh@1220 -- # return 0 00:08:37.590 06:48:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.590 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.590 06:48:59 -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.590 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.590 06:48:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:37.590 06:48:59 -- target/filesystem.sh@101 -- # killprocess 1214808 00:08:37.590 06:48:59 -- common/autotest_common.sh@936 -- # '[' -z 1214808 ']' 00:08:37.590 06:48:59 -- common/autotest_common.sh@940 -- # kill -0 1214808 00:08:37.590 06:48:59 -- common/autotest_common.sh@941 -- # uname 00:08:37.590 06:48:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:37.590 06:48:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1214808 00:08:37.590 06:48:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:37.590 06:48:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:37.590 06:48:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1214808' 00:08:37.590 killing process with pid 1214808 00:08:37.590 06:48:59 -- common/autotest_common.sh@955 -- # kill 1214808 00:08:37.590 06:48:59 -- common/autotest_common.sh@960 -- # wait 1214808 00:08:37.849 06:48:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:37.849 00:08:37.849 real 0m7.799s 00:08:37.849 user 0m30.502s 00:08:37.849 sys 0m1.151s 00:08:37.849 06:48:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.849 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:37.849 ************************************ 00:08:37.849 END TEST nvmf_filesystem_no_in_capsule 00:08:37.849 ************************************ 00:08:38.108 06:48:59 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:38.108 06:48:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:38.108 06:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.108 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.108 ************************************ 00:08:38.108 START TEST nvmf_filesystem_in_capsule 00:08:38.108 ************************************ 00:08:38.108 06:48:59 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:38.108 06:48:59 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:38.108 06:48:59 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:38.108 06:48:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.108 06:48:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.108 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.108 06:48:59 -- nvmf/common.sh@469 -- # nvmfpid=1216377 00:08:38.108 06:48:59 -- nvmf/common.sh@470 -- # waitforlisten 1216377 00:08:38.108 06:48:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.108 06:48:59 -- common/autotest_common.sh@829 -- # '[' -z 1216377 ']' 00:08:38.108 06:48:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.108 06:48:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.108 06:48:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
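The nvmfappstart step traced above launches nvmf_tgt with '-i 0 -e 0xFFFF -m 0xF' and then blocks in waitforlisten until the target's RPC socket comes up. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock path; the function name, retry count, and sleep interval are illustrative and not the autotest helper's actual internals:

    # Hypothetical stand-in for the waitforlisten step traced above.
    wait_for_rpc_socket() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        for ((i = 0; i < retries; i++)); do
            [[ -S $sock ]] && return 0   # -S: path exists and is a UNIX socket
            sleep 0.1
        done
        echo "nvmf_tgt never listened on $sock" >&2
        return 1
    }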
00:08:38.108 06:48:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.108 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:38.108 [2024-12-15 06:48:59.566679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.108 [2024-12-15 06:48:59.566732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.108 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.108 [2024-12-15 06:48:59.636191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.108 [2024-12-15 06:48:59.673169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.108 [2024-12-15 06:48:59.673280] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.108 [2024-12-15 06:48:59.673290] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.108 [2024-12-15 06:48:59.673299] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.108 [2024-12-15 06:48:59.673349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.108 [2024-12-15 06:48:59.673447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.108 [2024-12-15 06:48:59.673530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.108 [2024-12-15 06:48:59.673532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.043 06:49:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.043 06:49:00 -- common/autotest_common.sh@862 -- # return 0 00:08:39.043 06:49:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.043 06:49:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.043 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.043 06:49:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.043 06:49:00 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:39.043 06:49:00 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:39.043 06:49:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.043 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.043 [2024-12-15 06:49:00.462367] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1123f30/0x1128400) succeed. 00:08:39.043 [2024-12-15 06:49:00.471457] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11254d0/0x1169aa0) succeed. 
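With both mlx5 IB devices registered, the test provisions its target over RPC. The rpc_cmd calls traced below reduce to the following sequence; rpc_cmd is a wrapper around SPDK's scripts/rpc.py, and the script path here is an assumption for illustration. Note the '-c 4096' on the transport, which enables the 4096-byte in-capsule data this in_capsule variant exists to exercise, and the '512 512' arguments (size in MiB, block size in bytes) that give Malloc1 the 512-byte blocks and 1048576 num_blocks visible in the bdev_get_bdevs JSON below:

    rpc=./scripts/rpc.py   # path assumed; rpc_cmd resolves it inside the repo
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    $rpc bdev_malloc_create 512 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420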
00:08:39.043 06:49:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.043 06:49:00 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:39.043 06:49:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.043 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.303 Malloc1 00:08:39.303 06:49:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.303 06:49:00 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:39.303 06:49:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.303 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.303 06:49:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.303 06:49:00 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:39.303 06:49:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.303 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.303 06:49:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.303 06:49:00 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:39.303 06:49:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.303 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.303 [2024-12-15 06:49:00.737197] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:39.303 06:49:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.303 06:49:00 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:39.303 06:49:00 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:39.303 06:49:00 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:39.303 06:49:00 -- common/autotest_common.sh@1369 -- # local bs 00:08:39.303 06:49:00 -- common/autotest_common.sh@1370 -- # local nb 00:08:39.303 06:49:00 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:39.303 06:49:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.303 06:49:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.303 06:49:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.303 06:49:00 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:39.303 { 00:08:39.303 "name": "Malloc1", 00:08:39.303 "aliases": [ 00:08:39.303 "ff53e240-c6e5-4e4f-a4ef-9d66b6d33cea" 00:08:39.303 ], 00:08:39.303 "product_name": "Malloc disk", 00:08:39.303 "block_size": 512, 00:08:39.303 "num_blocks": 1048576, 00:08:39.303 "uuid": "ff53e240-c6e5-4e4f-a4ef-9d66b6d33cea", 00:08:39.303 "assigned_rate_limits": { 00:08:39.303 "rw_ios_per_sec": 0, 00:08:39.303 "rw_mbytes_per_sec": 0, 00:08:39.303 "r_mbytes_per_sec": 0, 00:08:39.303 "w_mbytes_per_sec": 0 00:08:39.303 }, 00:08:39.303 "claimed": true, 00:08:39.303 "claim_type": "exclusive_write", 00:08:39.303 "zoned": false, 00:08:39.303 "supported_io_types": { 00:08:39.303 "read": true, 00:08:39.303 "write": true, 00:08:39.303 "unmap": true, 00:08:39.303 "write_zeroes": true, 00:08:39.303 "flush": true, 00:08:39.303 "reset": true, 00:08:39.303 "compare": false, 00:08:39.303 "compare_and_write": false, 00:08:39.303 "abort": true, 00:08:39.303 "nvme_admin": false, 00:08:39.303 "nvme_io": false 00:08:39.303 }, 00:08:39.303 "memory_domains": [ 00:08:39.303 { 00:08:39.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.303 "dma_device_type": 2 00:08:39.303 } 00:08:39.303 ], 00:08:39.303 
"driver_specific": {} 00:08:39.303 } 00:08:39.303 ]' 00:08:39.303 06:49:00 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:39.303 06:49:00 -- common/autotest_common.sh@1372 -- # bs=512 00:08:39.303 06:49:00 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:39.303 06:49:00 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:39.303 06:49:00 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:39.303 06:49:00 -- common/autotest_common.sh@1377 -- # echo 512 00:08:39.303 06:49:00 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:39.303 06:49:00 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:40.239 06:49:01 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:40.239 06:49:01 -- common/autotest_common.sh@1187 -- # local i=0 00:08:40.239 06:49:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:40.239 06:49:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:40.239 06:49:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:42.771 06:49:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:42.771 06:49:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:42.771 06:49:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:42.771 06:49:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:42.771 06:49:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:42.771 06:49:03 -- common/autotest_common.sh@1197 -- # return 0 00:08:42.771 06:49:03 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:42.771 06:49:03 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:42.771 06:49:03 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:42.771 06:49:03 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:42.771 06:49:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:42.771 06:49:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:42.771 06:49:03 -- setup/common.sh@80 -- # echo 536870912 00:08:42.771 06:49:03 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:42.771 06:49:03 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:42.771 06:49:03 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:42.771 06:49:03 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:42.771 06:49:03 -- target/filesystem.sh@69 -- # partprobe 00:08:42.771 06:49:04 -- target/filesystem.sh@70 -- # sleep 1 00:08:43.708 06:49:05 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:43.708 06:49:05 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:43.708 06:49:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:43.708 06:49:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.708 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:43.708 ************************************ 00:08:43.708 START TEST filesystem_in_capsule_ext4 00:08:43.708 ************************************ 00:08:43.708 06:49:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:43.708 06:49:05 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:43.708 06:49:05 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:43.708 06:49:05 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:43.708 06:49:05 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:43.708 06:49:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:43.708 06:49:05 -- common/autotest_common.sh@914 -- # local i=0 00:08:43.708 06:49:05 -- common/autotest_common.sh@915 -- # local force 00:08:43.709 06:49:05 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:43.709 06:49:05 -- common/autotest_common.sh@918 -- # force=-F 00:08:43.709 06:49:05 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:43.709 mke2fs 1.47.0 (5-Feb-2023) 00:08:43.709 Discarding device blocks: 0/522240 done 00:08:43.709 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:43.709 Filesystem UUID: a2a862b8-95f6-4ccd-944d-234829bb0c2a 00:08:43.709 Superblock backups stored on blocks: 00:08:43.709 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:43.709 00:08:43.709 Allocating group tables: 0/64 done 00:08:43.709 Writing inode tables: 0/64 done 00:08:43.709 Creating journal (8192 blocks): done 00:08:43.709 Writing superblocks and filesystem accounting information: 0/64 done 00:08:43.709 00:08:43.709 06:49:05 -- common/autotest_common.sh@931 -- # return 0 00:08:43.709 06:49:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:43.709 06:49:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:43.709 06:49:05 -- target/filesystem.sh@25 -- # sync 00:08:43.709 06:49:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:43.709 06:49:05 -- target/filesystem.sh@27 -- # sync 00:08:43.709 06:49:05 -- target/filesystem.sh@29 -- # i=0 00:08:43.709 06:49:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:43.709 06:49:05 -- target/filesystem.sh@37 -- # kill -0 1216377 00:08:43.709 06:49:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:43.709 06:49:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:43.709 06:49:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:43.709 06:49:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:43.709 00:08:43.709 real 0m0.199s 00:08:43.709 user 0m0.026s 00:08:43.709 sys 0m0.081s 00:08:43.709 06:49:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.709 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:43.709 ************************************ 00:08:43.709 END TEST filesystem_in_capsule_ext4 00:08:43.709 ************************************ 00:08:43.709 06:49:05 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:43.709 06:49:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:43.709 06:49:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.709 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:43.709 ************************************ 00:08:43.709 START TEST filesystem_in_capsule_btrfs 00:08:43.709 ************************************ 00:08:43.709 06:49:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:43.709 06:49:05 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:43.709 06:49:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:43.709 06:49:05 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:43.709 06:49:05 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:43.709 06:49:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:08:43.709 06:49:05 -- common/autotest_common.sh@914 -- # local i=0 00:08:43.709 06:49:05 -- common/autotest_common.sh@915 -- # local force 00:08:43.709 06:49:05 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:43.709 06:49:05 -- common/autotest_common.sh@920 -- # force=-f 00:08:43.709 06:49:05 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:43.967 btrfs-progs v6.8.1 00:08:43.967 See https://btrfs.readthedocs.io for more information. 00:08:43.967 00:08:43.967 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:43.967 NOTE: several default settings have changed in version 5.15, please make sure 00:08:43.967 this does not affect your deployments: 00:08:43.967 - DUP for metadata (-m dup) 00:08:43.967 - enabled no-holes (-O no-holes) 00:08:43.967 - enabled free-space-tree (-R free-space-tree) 00:08:43.967 00:08:43.967 Label: (null) 00:08:43.967 UUID: fd398ceb-5c25-42ce-9d9f-df6c720c457e 00:08:43.967 Node size: 16384 00:08:43.967 Sector size: 4096 (CPU page size: 4096) 00:08:43.967 Filesystem size: 510.00MiB 00:08:43.967 Block group profiles: 00:08:43.967 Data: single 8.00MiB 00:08:43.967 Metadata: DUP 32.00MiB 00:08:43.967 System: DUP 8.00MiB 00:08:43.967 SSD detected: yes 00:08:43.967 Zoned device: no 00:08:43.967 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:43.967 Checksum: crc32c 00:08:43.967 Number of devices: 1 00:08:43.967 Devices: 00:08:43.967 ID SIZE PATH 00:08:43.967 1 510.00MiB /dev/nvme0n1p1 00:08:43.967 00:08:43.967 06:49:05 -- common/autotest_common.sh@931 -- # return 0 00:08:43.967 06:49:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:43.967 06:49:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:43.967 06:49:05 -- target/filesystem.sh@25 -- # sync 00:08:43.967 06:49:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:43.967 06:49:05 -- target/filesystem.sh@27 -- # sync 00:08:43.967 06:49:05 -- target/filesystem.sh@29 -- # i=0 00:08:43.967 06:49:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:43.967 06:49:05 -- target/filesystem.sh@37 -- # kill -0 1216377 00:08:43.967 06:49:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:43.967 06:49:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:43.967 06:49:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:43.967 06:49:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:43.967 00:08:43.967 real 0m0.249s 00:08:43.967 user 0m0.030s 00:08:43.967 sys 0m0.132s 00:08:43.967 06:49:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.967 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:43.967 ************************************ 00:08:43.967 END TEST filesystem_in_capsule_btrfs 00:08:43.967 ************************************ 00:08:43.967 06:49:05 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:43.967 06:49:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:43.967 06:49:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.967 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:43.967 ************************************ 00:08:43.967 START TEST filesystem_in_capsule_xfs 00:08:43.967 ************************************ 00:08:43.967 06:49:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:43.967 06:49:05 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:43.967 06:49:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:43.967 
06:49:05 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:43.967 06:49:05 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:43.967 06:49:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:43.967 06:49:05 -- common/autotest_common.sh@914 -- # local i=0 00:08:43.967 06:49:05 -- common/autotest_common.sh@915 -- # local force 00:08:43.967 06:49:05 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:43.967 06:49:05 -- common/autotest_common.sh@920 -- # force=-f 00:08:43.967 06:49:05 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:44.226 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:44.226 = sectsz=512 attr=2, projid32bit=1 00:08:44.226 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:44.226 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:44.226 data = bsize=4096 blocks=130560, imaxpct=25 00:08:44.226 = sunit=0 swidth=0 blks 00:08:44.226 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:44.226 log =internal log bsize=4096 blocks=16384, version=2 00:08:44.226 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:44.226 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:44.226 Discarding blocks...Done. 00:08:44.226 06:49:05 -- common/autotest_common.sh@931 -- # return 0 00:08:44.226 06:49:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:44.226 06:49:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:44.226 06:49:05 -- target/filesystem.sh@25 -- # sync 00:08:44.226 06:49:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:44.226 06:49:05 -- target/filesystem.sh@27 -- # sync 00:08:44.226 06:49:05 -- target/filesystem.sh@29 -- # i=0 00:08:44.226 06:49:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.226 06:49:05 -- target/filesystem.sh@37 -- # kill -0 1216377 00:08:44.226 06:49:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:44.226 06:49:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.226 06:49:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.226 06:49:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.226 00:08:44.226 real 0m0.202s 00:08:44.226 user 0m0.028s 00:08:44.226 sys 0m0.078s 00:08:44.226 06:49:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.226 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:44.226 ************************************ 00:08:44.226 END TEST filesystem_in_capsule_xfs 00:08:44.226 ************************************ 00:08:44.226 06:49:05 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:44.226 06:49:05 -- target/filesystem.sh@93 -- # sync 00:08:44.226 06:49:05 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.602 06:49:06 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.602 06:49:06 -- common/autotest_common.sh@1208 -- # local i=0 00:08:45.602 06:49:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:45.602 06:49:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.602 06:49:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:45.602 06:49:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.602 06:49:06 -- common/autotest_common.sh@1220 -- # return 0 00:08:45.602 06:49:06 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:08:45.602 06:49:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.602 06:49:06 -- common/autotest_common.sh@10 -- # set +x 00:08:45.602 06:49:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.602 06:49:06 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:45.602 06:49:06 -- target/filesystem.sh@101 -- # killprocess 1216377 00:08:45.602 06:49:06 -- common/autotest_common.sh@936 -- # '[' -z 1216377 ']' 00:08:45.602 06:49:06 -- common/autotest_common.sh@940 -- # kill -0 1216377 00:08:45.602 06:49:06 -- common/autotest_common.sh@941 -- # uname 00:08:45.602 06:49:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.602 06:49:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1216377 00:08:45.602 06:49:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.602 06:49:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.602 06:49:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1216377' 00:08:45.602 killing process with pid 1216377 00:08:45.602 06:49:06 -- common/autotest_common.sh@955 -- # kill 1216377 00:08:45.602 06:49:06 -- common/autotest_common.sh@960 -- # wait 1216377 00:08:45.861 06:49:07 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:45.861 00:08:45.861 real 0m7.808s 00:08:45.861 user 0m30.521s 00:08:45.861 sys 0m1.193s 00:08:45.861 06:49:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.861 06:49:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 ************************************ 00:08:45.861 END TEST nvmf_filesystem_in_capsule 00:08:45.861 ************************************ 00:08:45.861 06:49:07 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:45.861 06:49:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:45.861 06:49:07 -- nvmf/common.sh@116 -- # sync 00:08:45.861 06:49:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:45.861 06:49:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:45.861 06:49:07 -- nvmf/common.sh@119 -- # set +e 00:08:45.861 06:49:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:45.861 06:49:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:45.861 rmmod nvme_rdma 00:08:45.861 rmmod nvme_fabrics 00:08:45.861 06:49:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:45.861 06:49:07 -- nvmf/common.sh@123 -- # set -e 00:08:45.861 06:49:07 -- nvmf/common.sh@124 -- # return 0 00:08:45.861 06:49:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:45.861 06:49:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:45.861 06:49:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:45.861 00:08:45.861 real 0m23.094s 00:08:45.861 user 1m3.320s 00:08:45.861 sys 0m7.777s 00:08:45.861 06:49:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.861 06:49:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 ************************************ 00:08:45.861 END TEST nvmf_filesystem 00:08:45.861 ************************************ 00:08:45.861 06:49:07 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:45.862 06:49:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:45.862 06:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.862 06:49:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.862 ************************************ 00:08:45.862 START TEST nvmf_discovery 00:08:45.862 
************************************ 00:08:45.862 06:49:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:46.121 * Looking for test storage... 00:08:46.121 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:46.121 06:49:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:46.121 06:49:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:46.121 06:49:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:46.121 06:49:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:46.121 06:49:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:46.121 06:49:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:46.121 06:49:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:46.121 06:49:07 -- scripts/common.sh@335 -- # IFS=.-: 00:08:46.121 06:49:07 -- scripts/common.sh@335 -- # read -ra ver1 00:08:46.121 06:49:07 -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.121 06:49:07 -- scripts/common.sh@336 -- # read -ra ver2 00:08:46.121 06:49:07 -- scripts/common.sh@337 -- # local 'op=<' 00:08:46.121 06:49:07 -- scripts/common.sh@339 -- # ver1_l=2 00:08:46.121 06:49:07 -- scripts/common.sh@340 -- # ver2_l=1 00:08:46.121 06:49:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:46.121 06:49:07 -- scripts/common.sh@343 -- # case "$op" in 00:08:46.121 06:49:07 -- scripts/common.sh@344 -- # : 1 00:08:46.121 06:49:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:46.121 06:49:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.121 06:49:07 -- scripts/common.sh@364 -- # decimal 1 00:08:46.121 06:49:07 -- scripts/common.sh@352 -- # local d=1 00:08:46.121 06:49:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.121 06:49:07 -- scripts/common.sh@354 -- # echo 1 00:08:46.121 06:49:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:46.121 06:49:07 -- scripts/common.sh@365 -- # decimal 2 00:08:46.121 06:49:07 -- scripts/common.sh@352 -- # local d=2 00:08:46.121 06:49:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.121 06:49:07 -- scripts/common.sh@354 -- # echo 2 00:08:46.121 06:49:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:46.121 06:49:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:46.121 06:49:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:46.121 06:49:07 -- scripts/common.sh@367 -- # return 0 00:08:46.121 06:49:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.121 06:49:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.121 --rc genhtml_branch_coverage=1 00:08:46.121 --rc genhtml_function_coverage=1 00:08:46.121 --rc genhtml_legend=1 00:08:46.121 --rc geninfo_all_blocks=1 00:08:46.121 --rc geninfo_unexecuted_blocks=1 00:08:46.121 00:08:46.121 ' 00:08:46.121 06:49:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.121 --rc genhtml_branch_coverage=1 00:08:46.121 --rc genhtml_function_coverage=1 00:08:46.121 --rc genhtml_legend=1 00:08:46.121 --rc geninfo_all_blocks=1 00:08:46.121 --rc geninfo_unexecuted_blocks=1 00:08:46.121 00:08:46.121 ' 00:08:46.121 06:49:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:46.121 --rc genhtml_branch_coverage=1 00:08:46.121 --rc genhtml_function_coverage=1 00:08:46.121 --rc genhtml_legend=1 00:08:46.121 --rc geninfo_all_blocks=1 00:08:46.121 --rc geninfo_unexecuted_blocks=1 00:08:46.121 00:08:46.121 ' 00:08:46.121 06:49:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.121 --rc genhtml_branch_coverage=1 00:08:46.121 --rc genhtml_function_coverage=1 00:08:46.121 --rc genhtml_legend=1 00:08:46.121 --rc geninfo_all_blocks=1 00:08:46.121 --rc geninfo_unexecuted_blocks=1 00:08:46.121 00:08:46.121 ' 00:08:46.121 06:49:07 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.121 06:49:07 -- nvmf/common.sh@7 -- # uname -s 00:08:46.121 06:49:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.121 06:49:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.121 06:49:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.121 06:49:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.121 06:49:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.121 06:49:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.121 06:49:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.121 06:49:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.121 06:49:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.121 06:49:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.121 06:49:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:46.121 06:49:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:46.121 06:49:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.121 06:49:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.121 06:49:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.121 06:49:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:46.121 06:49:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.121 06:49:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.121 06:49:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.121 06:49:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.122 06:49:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.122 06:49:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.122 06:49:07 -- paths/export.sh@5 -- # export PATH 00:08:46.122 06:49:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.122 06:49:07 -- nvmf/common.sh@46 -- # : 0 00:08:46.122 06:49:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:46.122 06:49:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:46.122 06:49:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:46.122 06:49:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.122 06:49:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.122 06:49:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:46.122 06:49:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:46.122 06:49:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:46.122 06:49:07 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:46.122 06:49:07 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:46.122 06:49:07 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:46.122 06:49:07 -- target/discovery.sh@15 -- # hash nvme 00:08:46.122 06:49:07 -- target/discovery.sh@20 -- # nvmftestinit 00:08:46.122 06:49:07 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:46.122 06:49:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.122 06:49:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:46.122 06:49:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:46.122 06:49:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:46.122 06:49:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.122 06:49:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.122 06:49:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.122 06:49:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:46.122 06:49:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:46.122 06:49:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:46.122 06:49:07 -- common/autotest_common.sh@10 -- # set +x 00:08:52.688 06:49:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:52.688 06:49:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:52.688 06:49:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:52.688 06:49:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:52.688 06:49:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:52.688 06:49:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:52.688 06:49:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:52.688 06:49:14 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:52.688 06:49:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:52.688 06:49:14 -- nvmf/common.sh@295 -- # e810=() 00:08:52.688 06:49:14 -- nvmf/common.sh@295 -- # local -ga e810 00:08:52.688 06:49:14 -- nvmf/common.sh@296 -- # x722=() 00:08:52.688 06:49:14 -- nvmf/common.sh@296 -- # local -ga x722 00:08:52.688 06:49:14 -- nvmf/common.sh@297 -- # mlx=() 00:08:52.688 06:49:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:52.688 06:49:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.688 06:49:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:52.688 06:49:14 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:52.688 06:49:14 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:52.688 06:49:14 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:52.688 06:49:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:52.688 06:49:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.688 06:49:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:52.688 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:52.688 06:49:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.688 06:49:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.688 06:49:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:52.688 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:52.688 06:49:14 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.688 06:49:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:52.688 06:49:14 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.688 
06:49:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.688 06:49:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.688 06:49:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.688 06:49:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:52.688 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:52.688 06:49:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.688 06:49:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.688 06:49:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.688 06:49:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.688 06:49:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.688 06:49:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:52.688 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:52.688 06:49:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.688 06:49:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:52.688 06:49:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:52.688 06:49:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:52.688 06:49:14 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:52.688 06:49:14 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:52.688 06:49:14 -- nvmf/common.sh@57 -- # uname 00:08:52.688 06:49:14 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:52.688 06:49:14 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:52.688 06:49:14 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:52.688 06:49:14 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:52.688 06:49:14 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:52.688 06:49:14 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:52.688 06:49:14 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:52.948 06:49:14 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:52.948 06:49:14 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:52.948 06:49:14 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:52.948 06:49:14 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:52.948 06:49:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.948 06:49:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:52.948 06:49:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:52.948 06:49:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.948 06:49:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:52.948 06:49:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@104 -- # continue 2 00:08:52.948 06:49:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@104 -- # continue 2 00:08:52.948 06:49:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:52.948 06:49:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.948 06:49:14 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:52.948 06:49:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:52.948 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.948 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:52.948 altname enp217s0f0np0 00:08:52.948 altname ens818f0np0 00:08:52.948 inet 192.168.100.8/24 scope global mlx_0_0 00:08:52.948 valid_lft forever preferred_lft forever 00:08:52.948 06:49:14 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:52.948 06:49:14 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.948 06:49:14 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:52.948 06:49:14 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:52.948 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.948 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:52.948 altname enp217s0f1np1 00:08:52.948 altname ens818f1np1 00:08:52.948 inet 192.168.100.9/24 scope global mlx_0_1 00:08:52.948 valid_lft forever preferred_lft forever 00:08:52.948 06:49:14 -- nvmf/common.sh@410 -- # return 0 00:08:52.948 06:49:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:52.948 06:49:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:52.948 06:49:14 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:52.948 06:49:14 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:52.948 06:49:14 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.948 06:49:14 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:52.948 06:49:14 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:52.948 06:49:14 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.948 06:49:14 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:52.948 06:49:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@104 -- # continue 2 00:08:52.948 06:49:14 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.948 06:49:14 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.948 06:49:14 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@104 -- # continue 2 00:08:52.948 06:49:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:52.948 06:49:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.948 06:49:14 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:52.948 06:49:14 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.948 06:49:14 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.948 06:49:14 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:52.948 192.168.100.9' 00:08:52.948 06:49:14 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:52.948 192.168.100.9' 00:08:52.948 06:49:14 -- nvmf/common.sh@445 -- # head -n 1 00:08:52.948 06:49:14 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:52.948 06:49:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:52.948 192.168.100.9' 00:08:52.948 06:49:14 -- nvmf/common.sh@446 -- # tail -n +2 00:08:52.948 06:49:14 -- nvmf/common.sh@446 -- # head -n 1 00:08:52.948 06:49:14 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:52.948 06:49:14 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:52.948 06:49:14 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:52.948 06:49:14 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:52.948 06:49:14 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:52.948 06:49:14 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:52.948 06:49:14 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:52.948 06:49:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.948 06:49:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.948 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:08:52.948 06:49:14 -- nvmf/common.sh@469 -- # nvmfpid=1221142 00:08:52.948 06:49:14 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.948 06:49:14 -- nvmf/common.sh@470 -- # waitforlisten 1221142 00:08:52.948 06:49:14 -- common/autotest_common.sh@829 -- # '[' -z 1221142 ']' 00:08:52.948 06:49:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.948 06:49:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.948 06:49:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.948 06:49:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.948 06:49:14 -- common/autotest_common.sh@10 -- # set +x 00:08:52.948 [2024-12-15 06:49:14.580068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:52.948 [2024-12-15 06:49:14.580120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.207 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.207 [2024-12-15 06:49:14.651063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.207 [2024-12-15 06:49:14.688853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.207 [2024-12-15 06:49:14.688964] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.207 [2024-12-15 06:49:14.688980] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.207 [2024-12-15 06:49:14.688989] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.207 [2024-12-15 06:49:14.689033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.207 [2024-12-15 06:49:14.689060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.207 [2024-12-15 06:49:14.689151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.207 [2024-12-15 06:49:14.689153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.773 06:49:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.773 06:49:15 -- common/autotest_common.sh@862 -- # return 0 00:08:53.773 06:49:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:53.773 06:49:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:53.773 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.030 06:49:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.030 06:49:15 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:54.030 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.030 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.030 [2024-12-15 06:49:15.479824] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10150d0/0x10195a0) succeed. 00:08:54.030 [2024-12-15 06:49:15.488887] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1016670/0x105ac40) succeed. 
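With the target up, discovery.sh builds four null-backed subsystems plus a discovery referral before running nvme discover. Condensing the 'seq 1 4' loop traced below into plain rpc.py calls (script path assumed as before; the 102400 and 512 arguments come from NULL_BDEV_SIZE and NULL_BLOCK_SIZE set earlier in the trace):

    rpc=./scripts/rpc.py   # path assumed, as above
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done
    # Discovery service listener plus a referral on port 4430: one current
    # discovery subsystem + four nvme subsystems + one referral account for
    # the six records in the discovery log printed below.
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430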
00:08:54.030 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.030 06:49:15 -- target/discovery.sh@26 -- # seq 1 4 00:08:54.030 06:49:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.030 06:49:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:54.030 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.030 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.030 Null1 00:08:54.030 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.030 06:49:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.030 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.030 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.030 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.030 06:49:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:54.030 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.030 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.030 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.030 06:49:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:54.030 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.030 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.030 [2024-12-15 06:49:15.654323] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:54.030 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.030 06:49:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.030 06:49:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:54.030 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.030 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.030 Null2 00:08:54.030 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.030 06:49:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:54.030 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.031 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.288 06:49:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 Null3 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:54.288 06:49:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 Null4 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.288 06:49:15 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:54.288 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.288 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.288 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.289 06:49:15 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:54.289 00:08:54.289 Discovery Log Number of Records 6, Generation counter 6 00:08:54.289 =====Discovery Log Entry 0====== 00:08:54.289 trtype: 
rdma 00:08:54.289 adrfam: ipv4 00:08:54.289 subtype: current discovery subsystem 00:08:54.289 treq: not required 00:08:54.289 portid: 0 00:08:54.289 trsvcid: 4420 00:08:54.289 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:54.289 traddr: 192.168.100.8 00:08:54.289 eflags: explicit discovery connections, duplicate discovery information 00:08:54.289 rdma_prtype: not specified 00:08:54.289 rdma_qptype: connected 00:08:54.289 rdma_cms: rdma-cm 00:08:54.289 rdma_pkey: 0x0000 00:08:54.289 =====Discovery Log Entry 1====== 00:08:54.289 trtype: rdma 00:08:54.289 adrfam: ipv4 00:08:54.289 subtype: nvme subsystem 00:08:54.289 treq: not required 00:08:54.289 portid: 0 00:08:54.289 trsvcid: 4420 00:08:54.289 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:54.289 traddr: 192.168.100.8 00:08:54.289 eflags: none 00:08:54.289 rdma_prtype: not specified 00:08:54.289 rdma_qptype: connected 00:08:54.289 rdma_cms: rdma-cm 00:08:54.289 rdma_pkey: 0x0000 00:08:54.289 =====Discovery Log Entry 2====== 00:08:54.289 trtype: rdma 00:08:54.289 adrfam: ipv4 00:08:54.289 subtype: nvme subsystem 00:08:54.289 treq: not required 00:08:54.289 portid: 0 00:08:54.289 trsvcid: 4420 00:08:54.289 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:54.289 traddr: 192.168.100.8 00:08:54.289 eflags: none 00:08:54.289 rdma_prtype: not specified 00:08:54.289 rdma_qptype: connected 00:08:54.289 rdma_cms: rdma-cm 00:08:54.289 rdma_pkey: 0x0000 00:08:54.289 =====Discovery Log Entry 3====== 00:08:54.289 trtype: rdma 00:08:54.289 adrfam: ipv4 00:08:54.289 subtype: nvme subsystem 00:08:54.289 treq: not required 00:08:54.289 portid: 0 00:08:54.289 trsvcid: 4420 00:08:54.289 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:54.289 traddr: 192.168.100.8 00:08:54.289 eflags: none 00:08:54.289 rdma_prtype: not specified 00:08:54.289 rdma_qptype: connected 00:08:54.289 rdma_cms: rdma-cm 00:08:54.289 rdma_pkey: 0x0000 00:08:54.289 =====Discovery Log Entry 4====== 00:08:54.289 trtype: rdma 00:08:54.289 adrfam: ipv4 00:08:54.289 subtype: nvme subsystem 00:08:54.289 treq: not required 00:08:54.289 portid: 0 00:08:54.289 trsvcid: 4420 00:08:54.289 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:54.289 traddr: 192.168.100.8 00:08:54.289 eflags: none 00:08:54.289 rdma_prtype: not specified 00:08:54.289 rdma_qptype: connected 00:08:54.289 rdma_cms: rdma-cm 00:08:54.289 rdma_pkey: 0x0000 00:08:54.289 =====Discovery Log Entry 5====== 00:08:54.289 trtype: rdma 00:08:54.289 adrfam: ipv4 00:08:54.289 subtype: discovery subsystem referral 00:08:54.289 treq: not required 00:08:54.289 portid: 0 00:08:54.289 trsvcid: 4430 00:08:54.289 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:54.289 traddr: 192.168.100.8 00:08:54.289 eflags: none 00:08:54.289 rdma_prtype: unrecognized 00:08:54.289 rdma_qptype: unrecognized 00:08:54.289 rdma_cms: unrecognized 00:08:54.289 rdma_pkey: 0x0000 00:08:54.289 06:49:15 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:54.289 Perform nvmf subsystem discovery via RPC 00:08:54.289 06:49:15 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:54.289 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.289 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.289 [2024-12-15 06:49:15.882843] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:54.289 [ 00:08:54.289 { 00:08:54.289 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:54.289 "subtype": "Discovery", 
00:08:54.289 "listen_addresses": [ 00:08:54.289 { 00:08:54.289 "transport": "RDMA", 00:08:54.289 "trtype": "RDMA", 00:08:54.289 "adrfam": "IPv4", 00:08:54.289 "traddr": "192.168.100.8", 00:08:54.289 "trsvcid": "4420" 00:08:54.289 } 00:08:54.289 ], 00:08:54.289 "allow_any_host": true, 00:08:54.289 "hosts": [] 00:08:54.289 }, 00:08:54.289 { 00:08:54.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.289 "subtype": "NVMe", 00:08:54.289 "listen_addresses": [ 00:08:54.289 { 00:08:54.289 "transport": "RDMA", 00:08:54.289 "trtype": "RDMA", 00:08:54.289 "adrfam": "IPv4", 00:08:54.289 "traddr": "192.168.100.8", 00:08:54.289 "trsvcid": "4420" 00:08:54.289 } 00:08:54.289 ], 00:08:54.289 "allow_any_host": true, 00:08:54.289 "hosts": [], 00:08:54.289 "serial_number": "SPDK00000000000001", 00:08:54.289 "model_number": "SPDK bdev Controller", 00:08:54.289 "max_namespaces": 32, 00:08:54.289 "min_cntlid": 1, 00:08:54.289 "max_cntlid": 65519, 00:08:54.289 "namespaces": [ 00:08:54.289 { 00:08:54.289 "nsid": 1, 00:08:54.289 "bdev_name": "Null1", 00:08:54.289 "name": "Null1", 00:08:54.289 "nguid": "39971349C4F4444081F568F042E67993", 00:08:54.289 "uuid": "39971349-c4f4-4440-81f5-68f042e67993" 00:08:54.289 } 00:08:54.289 ] 00:08:54.289 }, 00:08:54.289 { 00:08:54.289 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:54.289 "subtype": "NVMe", 00:08:54.289 "listen_addresses": [ 00:08:54.289 { 00:08:54.289 "transport": "RDMA", 00:08:54.289 "trtype": "RDMA", 00:08:54.289 "adrfam": "IPv4", 00:08:54.289 "traddr": "192.168.100.8", 00:08:54.289 "trsvcid": "4420" 00:08:54.289 } 00:08:54.289 ], 00:08:54.289 "allow_any_host": true, 00:08:54.289 "hosts": [], 00:08:54.289 "serial_number": "SPDK00000000000002", 00:08:54.289 "model_number": "SPDK bdev Controller", 00:08:54.289 "max_namespaces": 32, 00:08:54.289 "min_cntlid": 1, 00:08:54.289 "max_cntlid": 65519, 00:08:54.289 "namespaces": [ 00:08:54.289 { 00:08:54.289 "nsid": 1, 00:08:54.289 "bdev_name": "Null2", 00:08:54.289 "name": "Null2", 00:08:54.289 "nguid": "C453E54D87A74FCAA40A6BB385E51D6B", 00:08:54.289 "uuid": "c453e54d-87a7-4fca-a40a-6bb385e51d6b" 00:08:54.289 } 00:08:54.289 ] 00:08:54.289 }, 00:08:54.289 { 00:08:54.289 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:54.289 "subtype": "NVMe", 00:08:54.289 "listen_addresses": [ 00:08:54.289 { 00:08:54.289 "transport": "RDMA", 00:08:54.289 "trtype": "RDMA", 00:08:54.289 "adrfam": "IPv4", 00:08:54.289 "traddr": "192.168.100.8", 00:08:54.289 "trsvcid": "4420" 00:08:54.289 } 00:08:54.289 ], 00:08:54.289 "allow_any_host": true, 00:08:54.289 "hosts": [], 00:08:54.289 "serial_number": "SPDK00000000000003", 00:08:54.289 "model_number": "SPDK bdev Controller", 00:08:54.289 "max_namespaces": 32, 00:08:54.289 "min_cntlid": 1, 00:08:54.289 "max_cntlid": 65519, 00:08:54.289 "namespaces": [ 00:08:54.289 { 00:08:54.289 "nsid": 1, 00:08:54.289 "bdev_name": "Null3", 00:08:54.289 "name": "Null3", 00:08:54.289 "nguid": "89EB1ED5E1804464949E21D5642B088F", 00:08:54.289 "uuid": "89eb1ed5-e180-4464-949e-21d5642b088f" 00:08:54.289 } 00:08:54.289 ] 00:08:54.289 }, 00:08:54.289 { 00:08:54.289 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:54.289 "subtype": "NVMe", 00:08:54.289 "listen_addresses": [ 00:08:54.289 { 00:08:54.289 "transport": "RDMA", 00:08:54.289 "trtype": "RDMA", 00:08:54.289 "adrfam": "IPv4", 00:08:54.289 "traddr": "192.168.100.8", 00:08:54.289 "trsvcid": "4420" 00:08:54.289 } 00:08:54.289 ], 00:08:54.289 "allow_any_host": true, 00:08:54.289 "hosts": [], 00:08:54.289 "serial_number": "SPDK00000000000004", 00:08:54.289 "model_number": "SPDK bdev 
Controller", 00:08:54.289 "max_namespaces": 32, 00:08:54.289 "min_cntlid": 1, 00:08:54.289 "max_cntlid": 65519, 00:08:54.289 "namespaces": [ 00:08:54.289 { 00:08:54.289 "nsid": 1, 00:08:54.289 "bdev_name": "Null4", 00:08:54.289 "name": "Null4", 00:08:54.289 "nguid": "598654C32C944651A482C45EE86EE0B4", 00:08:54.289 "uuid": "598654c3-2c94-4651-a482-c45ee86ee0b4" 00:08:54.289 } 00:08:54.289 ] 00:08:54.289 } 00:08:54.289 ] 00:08:54.289 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.289 06:49:15 -- target/discovery.sh@42 -- # seq 1 4 00:08:54.289 06:49:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.289 06:49:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.289 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.289 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.548 06:49:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.548 06:49:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:54.548 06:49:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 
06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:15 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:54.548 06:49:15 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:54.548 06:49:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.548 06:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:54.548 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.548 06:49:16 -- target/discovery.sh@49 -- # check_bdevs= 00:08:54.548 06:49:16 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:54.548 06:49:16 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:54.548 06:49:16 -- target/discovery.sh@57 -- # nvmftestfini 00:08:54.548 06:49:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:54.548 06:49:16 -- nvmf/common.sh@116 -- # sync 00:08:54.548 06:49:16 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:54.548 06:49:16 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:54.548 06:49:16 -- nvmf/common.sh@119 -- # set +e 00:08:54.548 06:49:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:54.548 06:49:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:54.548 rmmod nvme_rdma 00:08:54.548 rmmod nvme_fabrics 00:08:54.548 06:49:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:54.548 06:49:16 -- nvmf/common.sh@123 -- # set -e 00:08:54.548 06:49:16 -- nvmf/common.sh@124 -- # return 0 00:08:54.548 06:49:16 -- nvmf/common.sh@477 -- # '[' -n 1221142 ']' 00:08:54.548 06:49:16 -- nvmf/common.sh@478 -- # killprocess 1221142 00:08:54.548 06:49:16 -- common/autotest_common.sh@936 -- # '[' -z 1221142 ']' 00:08:54.548 06:49:16 -- common/autotest_common.sh@940 -- # kill -0 1221142 00:08:54.548 06:49:16 -- common/autotest_common.sh@941 -- # uname 00:08:54.548 06:49:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:54.548 06:49:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1221142 00:08:54.548 06:49:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:54.548 06:49:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:54.548 06:49:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1221142' 00:08:54.548 killing process with pid 1221142 00:08:54.549 06:49:16 -- common/autotest_common.sh@955 -- # kill 1221142 00:08:54.549 [2024-12-15 06:49:16.154366] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:54.549 06:49:16 -- common/autotest_common.sh@960 -- # wait 1221142 00:08:54.807 06:49:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:54.807 06:49:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:54.807 00:08:54.807 real 0m8.949s 00:08:54.807 user 0m8.880s 00:08:54.807 sys 0m5.736s 00:08:54.807 06:49:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.807 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:54.807 ************************************ 00:08:54.807 END TEST nvmf_discovery 00:08:54.807 ************************************ 00:08:54.807 06:49:16 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:54.807 06:49:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:54.807 06:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.807 06:49:16 -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.067 ************************************ 00:08:55.067 START TEST nvmf_referrals 00:08:55.067 ************************************ 00:08:55.067 06:49:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:55.067 * Looking for test storage... 00:08:55.067 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:55.067 06:49:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:55.067 06:49:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:55.067 06:49:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:55.067 06:49:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:55.067 06:49:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:55.067 06:49:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:55.067 06:49:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:55.067 06:49:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:55.067 06:49:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:55.067 06:49:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.067 06:49:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:55.067 06:49:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:55.067 06:49:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:55.067 06:49:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:55.067 06:49:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:55.067 06:49:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:55.067 06:49:16 -- scripts/common.sh@344 -- # : 1 00:08:55.067 06:49:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:55.067 06:49:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.067 06:49:16 -- scripts/common.sh@364 -- # decimal 1 00:08:55.067 06:49:16 -- scripts/common.sh@352 -- # local d=1 00:08:55.067 06:49:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.067 06:49:16 -- scripts/common.sh@354 -- # echo 1 00:08:55.067 06:49:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:55.067 06:49:16 -- scripts/common.sh@365 -- # decimal 2 00:08:55.067 06:49:16 -- scripts/common.sh@352 -- # local d=2 00:08:55.067 06:49:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.067 06:49:16 -- scripts/common.sh@354 -- # echo 2 00:08:55.067 06:49:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:55.067 06:49:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:55.067 06:49:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:55.067 06:49:16 -- scripts/common.sh@367 -- # return 0 00:08:55.067 06:49:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.067 06:49:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:55.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.067 --rc genhtml_branch_coverage=1 00:08:55.067 --rc genhtml_function_coverage=1 00:08:55.067 --rc genhtml_legend=1 00:08:55.067 --rc geninfo_all_blocks=1 00:08:55.067 --rc geninfo_unexecuted_blocks=1 00:08:55.067 00:08:55.067 ' 00:08:55.067 06:49:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:55.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.067 --rc genhtml_branch_coverage=1 00:08:55.067 --rc genhtml_function_coverage=1 00:08:55.067 --rc genhtml_legend=1 00:08:55.067 --rc geninfo_all_blocks=1 00:08:55.067 --rc geninfo_unexecuted_blocks=1 00:08:55.067 00:08:55.067 ' 00:08:55.067 
06:49:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:55.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.067 --rc genhtml_branch_coverage=1 00:08:55.067 --rc genhtml_function_coverage=1 00:08:55.067 --rc genhtml_legend=1 00:08:55.067 --rc geninfo_all_blocks=1 00:08:55.067 --rc geninfo_unexecuted_blocks=1 00:08:55.067 00:08:55.067 ' 00:08:55.067 06:49:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:55.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.067 --rc genhtml_branch_coverage=1 00:08:55.067 --rc genhtml_function_coverage=1 00:08:55.067 --rc genhtml_legend=1 00:08:55.067 --rc geninfo_all_blocks=1 00:08:55.067 --rc geninfo_unexecuted_blocks=1 00:08:55.067 00:08:55.067 ' 00:08:55.067 06:49:16 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.067 06:49:16 -- nvmf/common.sh@7 -- # uname -s 00:08:55.067 06:49:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.067 06:49:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.067 06:49:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.067 06:49:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.067 06:49:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.067 06:49:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.067 06:49:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.067 06:49:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.067 06:49:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.067 06:49:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.067 06:49:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:55.067 06:49:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:55.067 06:49:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.067 06:49:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.067 06:49:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.067 06:49:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:55.067 06:49:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.067 06:49:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.067 06:49:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.067 06:49:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.068 06:49:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.068 06:49:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.068 06:49:16 -- paths/export.sh@5 -- # export PATH 00:08:55.068 06:49:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.068 06:49:16 -- nvmf/common.sh@46 -- # : 0 00:08:55.068 06:49:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:55.068 06:49:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:55.068 06:49:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:55.068 06:49:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.068 06:49:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.068 06:49:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:55.068 06:49:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:55.068 06:49:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:55.068 06:49:16 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:55.068 06:49:16 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:55.068 06:49:16 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:55.068 06:49:16 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:55.068 06:49:16 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:55.068 06:49:16 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:55.068 06:49:16 -- target/referrals.sh@37 -- # nvmftestinit 00:08:55.068 06:49:16 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:55.068 06:49:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.068 06:49:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:55.068 06:49:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:55.068 06:49:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:55.068 06:49:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.068 06:49:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.068 06:49:16 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:55.068 06:49:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:55.068 06:49:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:55.068 06:49:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:55.068 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:09:01.637 06:49:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:01.637 06:49:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:01.637 06:49:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:01.637 06:49:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:01.637 06:49:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:01.637 06:49:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:01.637 06:49:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:01.637 06:49:23 -- nvmf/common.sh@294 -- # net_devs=() 00:09:01.637 06:49:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:01.637 06:49:23 -- nvmf/common.sh@295 -- # e810=() 00:09:01.637 06:49:23 -- nvmf/common.sh@295 -- # local -ga e810 00:09:01.637 06:49:23 -- nvmf/common.sh@296 -- # x722=() 00:09:01.637 06:49:23 -- nvmf/common.sh@296 -- # local -ga x722 00:09:01.637 06:49:23 -- nvmf/common.sh@297 -- # mlx=() 00:09:01.637 06:49:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:01.637 06:49:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.637 06:49:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:01.637 06:49:23 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:01.637 06:49:23 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:01.637 06:49:23 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:01.637 06:49:23 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:01.637 06:49:23 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:01.637 06:49:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:01.637 06:49:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:01.637 06:49:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:01.637 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:01.637 06:49:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:01.637 06:49:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:01.637 06:49:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:01.637 06:49:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:01.637 06:49:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:01.637 06:49:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:01.637 06:49:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:01.637 06:49:23 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:01.637 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:01.638 06:49:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:01.638 06:49:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:01.638 06:49:23 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:01.638 06:49:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.638 06:49:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:01.638 06:49:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.638 06:49:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:01.638 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:01.638 06:49:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.638 06:49:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:01.638 06:49:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.638 06:49:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:01.638 06:49:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.638 06:49:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:01.638 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:01.638 06:49:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.638 06:49:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:01.638 06:49:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:01.638 06:49:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:01.638 06:49:23 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:01.638 06:49:23 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:01.638 06:49:23 -- nvmf/common.sh@57 -- # uname 00:09:01.638 06:49:23 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:01.638 06:49:23 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:01.638 06:49:23 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:01.638 06:49:23 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:01.638 06:49:23 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:01.898 06:49:23 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:01.898 06:49:23 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:01.898 06:49:23 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:01.898 06:49:23 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:01.898 06:49:23 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:01.898 06:49:23 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:01.898 06:49:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:01.898 06:49:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:01.898 06:49:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:01.898 06:49:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:01.898 06:49:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:01.898 06:49:23 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:09:01.898 06:49:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:01.898 06:49:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:01.898 06:49:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:01.898 06:49:23 -- nvmf/common.sh@104 -- # continue 2 00:09:01.898 06:49:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:01.898 06:49:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:01.898 06:49:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:01.898 06:49:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:01.898 06:49:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:01.898 06:49:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:01.898 06:49:23 -- nvmf/common.sh@104 -- # continue 2 00:09:01.898 06:49:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:01.898 06:49:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:01.898 06:49:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:01.898 06:49:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:01.898 06:49:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:01.898 06:49:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:01.898 06:49:23 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:01.898 06:49:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:01.898 06:49:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:01.898 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:01.898 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:01.898 altname enp217s0f0np0 00:09:01.898 altname ens818f0np0 00:09:01.898 inet 192.168.100.8/24 scope global mlx_0_0 00:09:01.898 valid_lft forever preferred_lft forever 00:09:01.898 06:49:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:01.898 06:49:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:01.898 06:49:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:01.898 06:49:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:01.898 06:49:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:01.898 06:49:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:01.898 06:49:23 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:01.898 06:49:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:01.898 06:49:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:01.898 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:01.898 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:01.898 altname enp217s0f1np1 00:09:01.898 altname ens818f1np1 00:09:01.898 inet 192.168.100.9/24 scope global mlx_0_1 00:09:01.898 valid_lft forever preferred_lft forever 00:09:01.898 06:49:23 -- nvmf/common.sh@410 -- # return 0 00:09:01.898 06:49:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:01.898 06:49:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:01.898 06:49:23 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:01.898 06:49:23 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:01.898 06:49:23 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:01.898 06:49:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:01.898 06:49:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:01.899 06:49:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:01.899 06:49:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:01.899 06:49:23 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:01.899 06:49:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:01.899 06:49:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:01.899 06:49:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:01.899 06:49:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:01.899 06:49:23 -- nvmf/common.sh@104 -- # continue 2 00:09:01.899 06:49:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:01.899 06:49:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:01.899 06:49:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:01.899 06:49:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:01.899 06:49:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:01.899 06:49:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:01.899 06:49:23 -- nvmf/common.sh@104 -- # continue 2 00:09:01.899 06:49:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:01.899 06:49:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:01.899 06:49:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:01.899 06:49:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:01.899 06:49:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:01.899 06:49:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:01.899 06:49:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:01.899 06:49:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:01.899 06:49:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:01.899 06:49:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:01.899 06:49:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:01.899 06:49:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:01.899 06:49:23 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:01.899 192.168.100.9' 00:09:01.899 06:49:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:01.899 192.168.100.9' 00:09:01.899 06:49:23 -- nvmf/common.sh@445 -- # head -n 1 00:09:01.899 06:49:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:01.899 06:49:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:01.899 192.168.100.9' 00:09:01.899 06:49:23 -- nvmf/common.sh@446 -- # tail -n +2 00:09:01.899 06:49:23 -- nvmf/common.sh@446 -- # head -n 1 00:09:01.899 06:49:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:01.899 06:49:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:01.899 06:49:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:01.899 06:49:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:01.899 06:49:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:01.899 06:49:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:01.899 06:49:23 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:01.899 06:49:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:01.899 06:49:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.899 06:49:23 -- common/autotest_common.sh@10 -- # set +x 00:09:01.899 06:49:23 -- nvmf/common.sh@469 -- # nvmfpid=1224865 00:09:01.899 06:49:23 -- nvmf/common.sh@470 -- # waitforlisten 1224865 00:09:01.899 06:49:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.899 06:49:23 -- common/autotest_common.sh@829 -- # '[' -z 1224865 ']' 00:09:01.899 06:49:23 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:01.899 06:49:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.899 06:49:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.899 06:49:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.899 06:49:23 -- common/autotest_common.sh@10 -- # set +x 00:09:02.158 [2024-12-15 06:49:23.549799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:02.158 [2024-12-15 06:49:23.549853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.158 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.158 [2024-12-15 06:49:23.624149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.158 [2024-12-15 06:49:23.663566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.158 [2024-12-15 06:49:23.663678] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.158 [2024-12-15 06:49:23.663689] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.158 [2024-12-15 06:49:23.663699] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.158 [2024-12-15 06:49:23.663745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.158 [2024-12-15 06:49:23.663839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.158 [2024-12-15 06:49:23.663906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.158 [2024-12-15 06:49:23.663908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.095 06:49:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.095 06:49:24 -- common/autotest_common.sh@862 -- # return 0 00:09:03.095 06:49:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:03.095 06:49:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 06:49:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.095 06:49:24 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:03.095 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 [2024-12-15 06:49:24.445627] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6470d0/0x64b5a0) succeed. 00:09:03.095 [2024-12-15 06:49:24.454863] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x648670/0x68cc40) succeed. 
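[Editor's note] The IB device creation above completes the target-side plumbing the referrals test needs: an RDMA transport, a discovery listener on port 8009, and three referral entries on port 4430, all driven through rpc_cmd. Outside the harness the same sequence can be reproduced with the in-tree scripts/rpc.py; the following is a minimal sketch, assuming nvmf_tgt is already running on the default /var/tmp/spdk.sock socket and that rpc.py sits at the conventional SPDK path (the RPC method names and arguments are taken verbatim from the trace around target/referrals.sh@40-48 immediately below):

    # assumed path; adjust to wherever the SPDK tree is checked out
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # RDMA transport with the same buffer sizing used by the test
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    # expose the discovery subsystem itself on 192.168.100.8:8009
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 8009

    # register the three referrals the test later asserts on
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done

    # sanity check: the referral list should now report length 3
    $RPC nvmf_discovery_get_referrals | jq length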
00:09:03.095 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:03.095 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 [2024-12-15 06:49:24.577948] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:03.095 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:03.095 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:03.095 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:03.095 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:03.095 06:49:24 -- target/referrals.sh@48 -- # jq length 00:09:03.095 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:03.095 06:49:24 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:03.095 06:49:24 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:03.095 06:49:24 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:03.095 06:49:24 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:03.095 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.095 06:49:24 -- target/referrals.sh@21 -- # sort 00:09:03.095 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.095 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:03.095 06:49:24 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:03.095 06:49:24 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:03.095 06:49:24 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:03.095 06:49:24 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:03.095 06:49:24 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:03.095 06:49:24 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:03.095 06:49:24 -- target/referrals.sh@26 -- # sort 00:09:03.354 06:49:24 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
00:09:03.354 06:49:24 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:03.354 06:49:24 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:03.354 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.354 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.354 06:49:24 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:03.354 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.354 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.354 06:49:24 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:03.354 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.354 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.354 06:49:24 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:03.354 06:49:24 -- target/referrals.sh@56 -- # jq length 00:09:03.354 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.354 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.354 06:49:24 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:03.354 06:49:24 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:03.354 06:49:24 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:03.354 06:49:24 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:03.354 06:49:24 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:03.354 06:49:24 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:03.354 06:49:24 -- target/referrals.sh@26 -- # sort 00:09:03.354 06:49:24 -- target/referrals.sh@26 -- # echo 00:09:03.354 06:49:24 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:03.354 06:49:24 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:03.354 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.354 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.354 06:49:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.354 06:49:24 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:03.354 06:49:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.354 06:49:24 -- common/autotest_common.sh@10 -- # set +x 00:09:03.613 06:49:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.613 06:49:25 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:03.613 06:49:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:03.613 06:49:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:03.613 06:49:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:03.613 06:49:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.613 06:49:25 -- target/referrals.sh@21 -- # sort 00:09:03.613 06:49:25 -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.613 06:49:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.613 06:49:25 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:03.613 06:49:25 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:03.613 06:49:25 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:03.613 06:49:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:03.613 06:49:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:03.613 06:49:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:03.613 06:49:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:03.613 06:49:25 -- target/referrals.sh@26 -- # sort 00:09:03.613 06:49:25 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:03.613 06:49:25 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:03.613 06:49:25 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:03.613 06:49:25 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:03.613 06:49:25 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:03.613 06:49:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:03.613 06:49:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:03.871 06:49:25 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:03.871 06:49:25 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:03.871 06:49:25 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:03.871 06:49:25 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:03.871 06:49:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:03.871 06:49:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:03.871 06:49:25 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:03.871 06:49:25 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:03.871 06:49:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.871 06:49:25 -- common/autotest_common.sh@10 -- # set +x 00:09:03.871 06:49:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.871 06:49:25 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:03.871 06:49:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:03.871 06:49:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:03.871 06:49:25 -- target/referrals.sh@21 -- # sort 00:09:03.871 06:49:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:03.871 06:49:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.871 06:49:25 -- common/autotest_common.sh@10 -- 
# set +x 00:09:03.871 06:49:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.871 06:49:25 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:03.871 06:49:25 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:03.871 06:49:25 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:03.871 06:49:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:03.871 06:49:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:03.871 06:49:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:03.871 06:49:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:03.871 06:49:25 -- target/referrals.sh@26 -- # sort 00:09:04.129 06:49:25 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:04.129 06:49:25 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:04.129 06:49:25 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:04.129 06:49:25 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:04.129 06:49:25 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:04.129 06:49:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:04.129 06:49:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:04.129 06:49:25 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:04.129 06:49:25 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:04.129 06:49:25 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:04.129 06:49:25 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:04.129 06:49:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:04.129 06:49:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:04.129 06:49:25 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:04.129 06:49:25 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:04.129 06:49:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.129 06:49:25 -- common/autotest_common.sh@10 -- # set +x 00:09:04.388 06:49:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.388 06:49:25 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:04.388 06:49:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.388 06:49:25 -- common/autotest_common.sh@10 -- # set +x 00:09:04.388 06:49:25 -- target/referrals.sh@82 -- # jq length 00:09:04.388 06:49:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.388 06:49:25 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:04.388 06:49:25 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:04.388 06:49:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:04.388 06:49:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:04.388 06:49:25 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:04.388 06:49:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:04.388 06:49:25 -- target/referrals.sh@26 -- # sort 00:09:04.388 06:49:25 -- target/referrals.sh@26 -- # echo 00:09:04.388 06:49:25 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:04.388 06:49:25 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:04.388 06:49:25 -- target/referrals.sh@86 -- # nvmftestfini 00:09:04.388 06:49:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:04.388 06:49:25 -- nvmf/common.sh@116 -- # sync 00:09:04.388 06:49:25 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:04.388 06:49:25 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:04.388 06:49:25 -- nvmf/common.sh@119 -- # set +e 00:09:04.388 06:49:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:04.389 06:49:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:04.389 rmmod nvme_rdma 00:09:04.389 rmmod nvme_fabrics 00:09:04.389 06:49:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:04.389 06:49:25 -- nvmf/common.sh@123 -- # set -e 00:09:04.389 06:49:25 -- nvmf/common.sh@124 -- # return 0 00:09:04.389 06:49:25 -- nvmf/common.sh@477 -- # '[' -n 1224865 ']' 00:09:04.389 06:49:25 -- nvmf/common.sh@478 -- # killprocess 1224865 00:09:04.389 06:49:25 -- common/autotest_common.sh@936 -- # '[' -z 1224865 ']' 00:09:04.389 06:49:25 -- common/autotest_common.sh@940 -- # kill -0 1224865 00:09:04.389 06:49:25 -- common/autotest_common.sh@941 -- # uname 00:09:04.389 06:49:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:04.389 06:49:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1224865 00:09:04.649 06:49:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:04.649 06:49:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:04.649 06:49:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1224865' 00:09:04.649 killing process with pid 1224865 00:09:04.649 06:49:26 -- common/autotest_common.sh@955 -- # kill 1224865 00:09:04.649 06:49:26 -- common/autotest_common.sh@960 -- # wait 1224865 00:09:04.649 06:49:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:04.649 06:49:26 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:04.649 00:09:04.649 real 0m9.832s 00:09:04.649 user 0m13.257s 00:09:04.649 sys 0m6.023s 00:09:04.649 06:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.649 06:49:26 -- common/autotest_common.sh@10 -- # set +x 00:09:04.649 ************************************ 00:09:04.649 END TEST nvmf_referrals 00:09:04.649 ************************************ 00:09:04.909 06:49:26 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:04.909 06:49:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:04.909 06:49:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.909 06:49:26 -- common/autotest_common.sh@10 -- # set +x 00:09:04.909 ************************************ 00:09:04.909 START TEST nvmf_connect_disconnect 00:09:04.909 ************************************ 00:09:04.909 06:49:26 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:04.909 * Looking for test storage... 00:09:04.909 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:04.909 06:49:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:04.909 06:49:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:04.909 06:49:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:04.909 06:49:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:04.909 06:49:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:04.909 06:49:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:04.909 06:49:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:04.909 06:49:26 -- scripts/common.sh@335 -- # IFS=.-: 00:09:04.909 06:49:26 -- scripts/common.sh@335 -- # read -ra ver1 00:09:04.909 06:49:26 -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.909 06:49:26 -- scripts/common.sh@336 -- # read -ra ver2 00:09:04.909 06:49:26 -- scripts/common.sh@337 -- # local 'op=<' 00:09:04.909 06:49:26 -- scripts/common.sh@339 -- # ver1_l=2 00:09:04.909 06:49:26 -- scripts/common.sh@340 -- # ver2_l=1 00:09:04.909 06:49:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:04.909 06:49:26 -- scripts/common.sh@343 -- # case "$op" in 00:09:04.909 06:49:26 -- scripts/common.sh@344 -- # : 1 00:09:04.909 06:49:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:04.909 06:49:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.909 06:49:26 -- scripts/common.sh@364 -- # decimal 1 00:09:04.909 06:49:26 -- scripts/common.sh@352 -- # local d=1 00:09:04.909 06:49:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.909 06:49:26 -- scripts/common.sh@354 -- # echo 1 00:09:04.909 06:49:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:04.909 06:49:26 -- scripts/common.sh@365 -- # decimal 2 00:09:04.909 06:49:26 -- scripts/common.sh@352 -- # local d=2 00:09:04.909 06:49:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.909 06:49:26 -- scripts/common.sh@354 -- # echo 2 00:09:04.909 06:49:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:04.909 06:49:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:04.909 06:49:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:04.909 06:49:26 -- scripts/common.sh@367 -- # return 0 00:09:04.909 06:49:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.909 06:49:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:04.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.909 --rc genhtml_branch_coverage=1 00:09:04.909 --rc genhtml_function_coverage=1 00:09:04.909 --rc genhtml_legend=1 00:09:04.909 --rc geninfo_all_blocks=1 00:09:04.909 --rc geninfo_unexecuted_blocks=1 00:09:04.909 00:09:04.909 ' 00:09:04.909 06:49:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:04.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.909 --rc genhtml_branch_coverage=1 00:09:04.909 --rc genhtml_function_coverage=1 00:09:04.909 --rc genhtml_legend=1 00:09:04.909 --rc geninfo_all_blocks=1 00:09:04.909 --rc geninfo_unexecuted_blocks=1 00:09:04.909 00:09:04.909 ' 00:09:04.909 06:49:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:04.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.909 --rc genhtml_branch_coverage=1 00:09:04.909 --rc genhtml_function_coverage=1 
00:09:04.909 --rc genhtml_legend=1 00:09:04.909 --rc geninfo_all_blocks=1 00:09:04.909 --rc geninfo_unexecuted_blocks=1 00:09:04.909 00:09:04.909 ' 00:09:04.909 06:49:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:04.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.909 --rc genhtml_branch_coverage=1 00:09:04.909 --rc genhtml_function_coverage=1 00:09:04.909 --rc genhtml_legend=1 00:09:04.909 --rc geninfo_all_blocks=1 00:09:04.909 --rc geninfo_unexecuted_blocks=1 00:09:04.909 00:09:04.909 ' 00:09:04.909 06:49:26 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.909 06:49:26 -- nvmf/common.sh@7 -- # uname -s 00:09:04.909 06:49:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.909 06:49:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.909 06:49:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.909 06:49:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.909 06:49:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.909 06:49:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.909 06:49:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.909 06:49:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.909 06:49:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.909 06:49:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.909 06:49:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:04.909 06:49:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:04.909 06:49:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.909 06:49:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.909 06:49:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.909 06:49:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:04.909 06:49:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.909 06:49:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.909 06:49:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.909 06:49:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.909 06:49:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.909 06:49:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.909 06:49:26 -- paths/export.sh@5 -- # export PATH 00:09:05.168 06:49:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.168 06:49:26 -- nvmf/common.sh@46 -- # : 0 00:09:05.168 06:49:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:05.168 06:49:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:05.168 06:49:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:05.168 06:49:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.168 06:49:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.168 06:49:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:05.168 06:49:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:05.168 06:49:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:05.168 06:49:26 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.168 06:49:26 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.168 06:49:26 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:05.168 06:49:26 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:05.168 06:49:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.168 06:49:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:05.168 06:49:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:05.168 06:49:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:05.168 06:49:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.168 06:49:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.168 06:49:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.168 06:49:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:05.168 06:49:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:05.168 06:49:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:05.168 06:49:26 -- common/autotest_common.sh@10 -- # set +x 00:09:11.813 06:49:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:11.813 06:49:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:11.813 06:49:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:11.813 06:49:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:11.813 06:49:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:11.813 06:49:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:11.813 06:49:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:11.813 06:49:33 -- nvmf/common.sh@294 -- # net_devs=() 00:09:11.813 06:49:33 -- nvmf/common.sh@294 -- # local -ga net_devs 
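
The nvmftestinit path above ends in gather_supported_nvmf_pci_devs, which classifies NICs into the e810, x722, and mlx arrays by PCI vendor/device ID before walking the bus (the ID table appears in the trace that follows). A minimal standalone sketch of that classification, assuming lspci is installed; the vendor/device pairs are the ones visible in this trace, everything else is illustrative:

  #!/usr/bin/env bash
  # List RDMA-capable NICs by vendor:device, mirroring the ID table the
  # harness builds (15b3 = Mellanox, 8086 = Intel; 15b3:1015 is the
  # ConnectX-4 Lx part detected on this rig).
  for id in 15b3:1015 15b3:1017 15b3:1019 8086:1592 8086:159b; do
      lspci -D -d "$id"
  done

On this machine the two hits are 0000:d9:00.0 and 0000:d9:00.1, matching the "Found 0000:d9:00.x (0x15b3 - 0x1015)" lines below.
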
00:09:11.813 06:49:33 -- nvmf/common.sh@295 -- # e810=() 00:09:11.813 06:49:33 -- nvmf/common.sh@295 -- # local -ga e810 00:09:11.813 06:49:33 -- nvmf/common.sh@296 -- # x722=() 00:09:11.813 06:49:33 -- nvmf/common.sh@296 -- # local -ga x722 00:09:11.813 06:49:33 -- nvmf/common.sh@297 -- # mlx=() 00:09:11.813 06:49:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:11.813 06:49:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.813 06:49:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:11.813 06:49:33 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:11.813 06:49:33 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:11.813 06:49:33 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:11.813 06:49:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:11.813 06:49:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:11.813 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:11.813 06:49:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.813 06:49:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:11.813 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:11.813 06:49:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.813 06:49:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:11.813 06:49:33 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.813 06:49:33 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:11.813 06:49:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.813 06:49:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:11.813 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:11.813 06:49:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.813 06:49:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.813 06:49:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:11.813 06:49:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.813 06:49:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:11.813 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:11.813 06:49:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.813 06:49:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:11.813 06:49:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:11.813 06:49:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:11.813 06:49:33 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:11.813 06:49:33 -- nvmf/common.sh@57 -- # uname 00:09:11.813 06:49:33 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:11.813 06:49:33 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:11.813 06:49:33 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:11.813 06:49:33 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:11.813 06:49:33 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:11.813 06:49:33 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:11.813 06:49:33 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:11.813 06:49:33 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:11.813 06:49:33 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:11.813 06:49:33 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:11.813 06:49:33 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:11.813 06:49:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.813 06:49:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:11.813 06:49:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:11.813 06:49:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:11.813 06:49:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:11.813 06:49:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:11.813 06:49:33 -- nvmf/common.sh@104 -- # continue 2 00:09:11.813 06:49:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.813 06:49:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:11.813 06:49:33 -- nvmf/common.sh@104 -- # continue 2 00:09:11.813 06:49:33 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:11.813 06:49:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:11.813 06:49:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:11.813 06:49:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:11.813 06:49:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:11.813 06:49:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:11.813 06:49:33 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:11.813 06:49:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:11.813 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.813 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:11.813 altname enp217s0f0np0 00:09:11.813 altname ens818f0np0 00:09:11.813 inet 192.168.100.8/24 scope global mlx_0_0 00:09:11.813 valid_lft forever preferred_lft forever 00:09:11.813 06:49:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:11.813 06:49:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:11.813 06:49:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:11.813 06:49:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:11.813 06:49:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:11.813 06:49:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:11.813 06:49:33 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:11.813 06:49:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:11.813 06:49:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:11.814 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.814 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:11.814 altname enp217s0f1np1 00:09:11.814 altname ens818f1np1 00:09:11.814 inet 192.168.100.9/24 scope global mlx_0_1 00:09:11.814 valid_lft forever preferred_lft forever 00:09:11.814 06:49:33 -- nvmf/common.sh@410 -- # return 0 00:09:11.814 06:49:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:11.814 06:49:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:11.814 06:49:33 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:11.814 06:49:33 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:11.814 06:49:33 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:11.814 06:49:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.814 06:49:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:11.814 06:49:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:11.814 06:49:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:11.814 06:49:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:11.814 06:49:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:11.814 06:49:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.814 06:49:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:11.814 06:49:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:11.814 06:49:33 -- nvmf/common.sh@104 -- # continue 2 00:09:11.814 06:49:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:11.814 06:49:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.814 06:49:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:11.814 06:49:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.814 06:49:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:11.814 06:49:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 
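
get_ip_address, traced above for mlx_0_0 and mlx_0_1, is just an ip/awk/cut pipeline. Reproduced as a standalone helper — the pipeline is exactly what the trace shows; only the function wrapper is a sketch:

  get_ip_address() {
      local interface=$1
      # Fourth column of `ip -o -4 addr show` is ADDR/PREFIX; strip the
      # prefix length to leave the bare IPv4 address.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # 192.168.100.9
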
00:09:11.814 06:49:33 -- nvmf/common.sh@104 -- # continue 2 00:09:11.814 06:49:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:11.814 06:49:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:11.814 06:49:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:11.814 06:49:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:11.814 06:49:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:11.814 06:49:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:11.814 06:49:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:11.814 06:49:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:11.814 06:49:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:11.814 06:49:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:11.814 06:49:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:11.814 06:49:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:11.814 06:49:33 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:11.814 192.168.100.9' 00:09:11.814 06:49:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:11.814 192.168.100.9' 00:09:11.814 06:49:33 -- nvmf/common.sh@445 -- # head -n 1 00:09:11.814 06:49:33 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:11.814 06:49:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:11.814 192.168.100.9' 00:09:11.814 06:49:33 -- nvmf/common.sh@446 -- # tail -n +2 00:09:11.814 06:49:33 -- nvmf/common.sh@446 -- # head -n 1 00:09:11.814 06:49:33 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:11.814 06:49:33 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:11.814 06:49:33 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:11.814 06:49:33 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:11.814 06:49:33 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:11.814 06:49:33 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:11.814 06:49:33 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:11.814 06:49:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:11.814 06:49:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:11.814 06:49:33 -- common/autotest_common.sh@10 -- # set +x 00:09:11.814 06:49:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.814 06:49:33 -- nvmf/common.sh@469 -- # nvmfpid=1228746 00:09:11.814 06:49:33 -- nvmf/common.sh@470 -- # waitforlisten 1228746 00:09:11.814 06:49:33 -- common/autotest_common.sh@829 -- # '[' -z 1228746 ']' 00:09:11.814 06:49:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.814 06:49:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.814 06:49:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.814 06:49:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.814 06:49:33 -- common/autotest_common.sh@10 -- # set +x 00:09:11.814 [2024-12-15 06:49:33.343748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
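
The RDMA_IP_LIST handling above splits a newline-separated address list into the first and second target IPs with head/tail. The same logic in isolation, with the values from this run:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  # First line -> primary listener address, second line -> secondary.
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP"    # 192.168.100.8
  echo "$NVMF_SECOND_TARGET_IP"   # 192.168.100.9
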
00:09:11.814 [2024-12-15 06:49:33.343806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.814 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.814 [2024-12-15 06:49:33.412475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.073 [2024-12-15 06:49:33.452411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:12.073 [2024-12-15 06:49:33.452519] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.073 [2024-12-15 06:49:33.452529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.073 [2024-12-15 06:49:33.452539] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.073 [2024-12-15 06:49:33.452585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.073 [2024-12-15 06:49:33.452665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.073 [2024-12-15 06:49:33.452749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.073 [2024-12-15 06:49:33.452751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.640 06:49:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.640 06:49:34 -- common/autotest_common.sh@862 -- # return 0 00:09:12.640 06:49:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:12.640 06:49:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.640 06:49:34 -- common/autotest_common.sh@10 -- # set +x 00:09:12.640 06:49:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.640 06:49:34 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:12.640 06:49:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.640 06:49:34 -- common/autotest_common.sh@10 -- # set +x 00:09:12.640 [2024-12-15 06:49:34.238571] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:12.640 [2024-12-15 06:49:34.259400] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x107e0f0/0x10825c0) succeed. 00:09:12.640 [2024-12-15 06:49:34.268525] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x107f690/0x10c3c60) succeed. 
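
With nvmf_tgt listening, connect_disconnect.sh drives the bring-up through rpc_cmd: an RDMA transport, a 64 MiB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with a namespace, and a listener on 192.168.100.8:4420, then 100 nvme connect/disconnect cycles — hence the wall of "disconnected 1 controller(s)" lines that follows. The equivalent sequence against a running target, as a hedged sketch: the rpc.py path and the exact loop body (any waiting between connect and disconnect is not visible in this trace) are assumptions; the RPC names, arguments, and nvme-cli flags are the ones traced here:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed location
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                                     # -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Test body: 100 cycles; NVME_CONNECT is overridden to 'nvme connect -i 8'
  # by connect_disconnect.sh@29 above.
  for i in $(seq 1 100); do
      nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode1 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
          --hostid=8013ee90-59d8-e711-906e-00163566263e
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
  done
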
00:09:12.898 06:49:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:12.898 06:49:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.898 06:49:34 -- common/autotest_common.sh@10 -- # set +x 00:09:12.898 06:49:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:12.898 06:49:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.898 06:49:34 -- common/autotest_common.sh@10 -- # set +x 00:09:12.898 06:49:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.898 06:49:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.898 06:49:34 -- common/autotest_common.sh@10 -- # set +x 00:09:12.898 06:49:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:12.898 06:49:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.898 06:49:34 -- common/autotest_common.sh@10 -- # set +x 00:09:12.898 [2024-12-15 06:49:34.408497] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:12.898 06:49:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:12.898 06:49:34 -- target/connect_disconnect.sh@34 -- # set +x 00:09:16.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.242 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:56.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.707 06:54:49 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:28.707 06:54:49 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:28.707 06:54:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:28.707 06:54:49 -- nvmf/common.sh@116 -- # sync 00:14:28.707 06:54:49 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:28.707 06:54:49 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:28.707 06:54:49 -- nvmf/common.sh@119 -- # set +e 00:14:28.707 06:54:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:28.707 06:54:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:28.707 rmmod nvme_rdma 00:14:28.707 rmmod nvme_fabrics 00:14:28.707 06:54:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:28.707 06:54:49 -- nvmf/common.sh@123 -- # set -e 00:14:28.707 06:54:49 -- nvmf/common.sh@124 -- # return 0 00:14:28.707 06:54:49 -- nvmf/common.sh@477 -- # '[' -n 1228746 ']' 00:14:28.707 06:54:49 -- nvmf/common.sh@478 -- # killprocess 1228746 00:14:28.707 06:54:49 -- common/autotest_common.sh@936 -- # '[' -z 1228746 ']' 00:14:28.707 06:54:49 -- common/autotest_common.sh@940 -- # kill -0 1228746 00:14:28.707 06:54:49 -- common/autotest_common.sh@941 -- # uname 00:14:28.707 06:54:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.707 06:54:49 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1228746 00:14:28.707 06:54:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:28.707 06:54:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:28.707 06:54:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1228746' 00:14:28.707 killing process with pid 1228746 00:14:28.707 06:54:49 -- common/autotest_common.sh@955 -- # kill 1228746 00:14:28.707 06:54:49 -- common/autotest_common.sh@960 -- # wait 1228746 00:14:28.707 06:54:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:28.707 06:54:50 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:28.707 00:14:28.707 real 5m23.701s 00:14:28.707 user 21m3.057s 00:14:28.707 sys 0m17.994s 00:14:28.707 06:54:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:28.707 06:54:50 -- common/autotest_common.sh@10 -- # set +x 00:14:28.707 ************************************ 00:14:28.707 END TEST nvmf_connect_disconnect 00:14:28.707 ************************************ 00:14:28.707 06:54:50 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:28.707 06:54:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.707 06:54:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.707 06:54:50 -- common/autotest_common.sh@10 -- # set +x 00:14:28.707 ************************************ 00:14:28.707 START TEST nvmf_multitarget 00:14:28.707 ************************************ 00:14:28.707 06:54:50 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:28.707 * Looking for test storage... 00:14:28.707 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:28.707 06:54:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:28.707 06:54:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:28.707 06:54:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:28.707 06:54:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:28.707 06:54:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:28.707 06:54:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:28.707 06:54:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:28.707 06:54:50 -- scripts/common.sh@335 -- # IFS=.-: 00:14:28.707 06:54:50 -- scripts/common.sh@335 -- # read -ra ver1 00:14:28.707 06:54:50 -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.707 06:54:50 -- scripts/common.sh@336 -- # read -ra ver2 00:14:28.707 06:54:50 -- scripts/common.sh@337 -- # local 'op=<' 00:14:28.707 06:54:50 -- scripts/common.sh@339 -- # ver1_l=2 00:14:28.707 06:54:50 -- scripts/common.sh@340 -- # ver2_l=1 00:14:28.707 06:54:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:28.707 06:54:50 -- scripts/common.sh@343 -- # case "$op" in 00:14:28.707 06:54:50 -- scripts/common.sh@344 -- # : 1 00:14:28.707 06:54:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:28.707 06:54:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.707 06:54:50 -- scripts/common.sh@364 -- # decimal 1 00:14:28.707 06:54:50 -- scripts/common.sh@352 -- # local d=1 00:14:28.707 06:54:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.707 06:54:50 -- scripts/common.sh@354 -- # echo 1 00:14:28.707 06:54:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:28.707 06:54:50 -- scripts/common.sh@365 -- # decimal 2 00:14:28.707 06:54:50 -- scripts/common.sh@352 -- # local d=2 00:14:28.707 06:54:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.707 06:54:50 -- scripts/common.sh@354 -- # echo 2 00:14:28.707 06:54:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:28.707 06:54:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:28.707 06:54:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:28.707 06:54:50 -- scripts/common.sh@367 -- # return 0 00:14:28.707 06:54:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.707 06:54:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.707 --rc genhtml_branch_coverage=1 00:14:28.707 --rc genhtml_function_coverage=1 00:14:28.707 --rc genhtml_legend=1 00:14:28.707 --rc geninfo_all_blocks=1 00:14:28.707 --rc geninfo_unexecuted_blocks=1 00:14:28.707 00:14:28.707 ' 00:14:28.707 06:54:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.707 --rc genhtml_branch_coverage=1 00:14:28.707 --rc genhtml_function_coverage=1 00:14:28.707 --rc genhtml_legend=1 00:14:28.707 --rc geninfo_all_blocks=1 00:14:28.707 --rc geninfo_unexecuted_blocks=1 00:14:28.707 00:14:28.707 ' 00:14:28.707 06:54:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.707 --rc genhtml_branch_coverage=1 00:14:28.707 --rc genhtml_function_coverage=1 00:14:28.707 --rc genhtml_legend=1 00:14:28.707 --rc geninfo_all_blocks=1 00:14:28.707 --rc geninfo_unexecuted_blocks=1 00:14:28.707 00:14:28.707 ' 00:14:28.707 06:54:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.708 --rc genhtml_branch_coverage=1 00:14:28.708 --rc genhtml_function_coverage=1 00:14:28.708 --rc genhtml_legend=1 00:14:28.708 --rc geninfo_all_blocks=1 00:14:28.708 --rc geninfo_unexecuted_blocks=1 00:14:28.708 00:14:28.708 ' 00:14:28.708 06:54:50 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.708 06:54:50 -- nvmf/common.sh@7 -- # uname -s 00:14:28.708 06:54:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.708 06:54:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.708 06:54:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.708 06:54:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.708 06:54:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.708 06:54:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.708 06:54:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.708 06:54:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.708 06:54:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.708 06:54:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.708 06:54:50 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:28.708 06:54:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:28.708 06:54:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.708 06:54:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.708 06:54:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.708 06:54:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:28.708 06:54:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.708 06:54:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.708 06:54:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.708 06:54:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.708 06:54:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.708 06:54:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.708 06:54:50 -- paths/export.sh@5 -- # export PATH 00:14:28.708 06:54:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.708 06:54:50 -- nvmf/common.sh@46 -- # : 0 00:14:28.708 06:54:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.708 06:54:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.708 06:54:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.708 06:54:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.708 06:54:50 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.708 06:54:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.708 06:54:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.708 06:54:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.708 06:54:50 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:28.708 06:54:50 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:28.708 06:54:50 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:28.708 06:54:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.708 06:54:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.708 06:54:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.708 06:54:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.708 06:54:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.708 06:54:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.708 06:54:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.708 06:54:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:28.708 06:54:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:28.708 06:54:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:28.708 06:54:50 -- common/autotest_common.sh@10 -- # set +x 00:14:35.272 06:54:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:35.272 06:54:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:35.272 06:54:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:35.272 06:54:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:35.272 06:54:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:35.272 06:54:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:35.272 06:54:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:35.272 06:54:56 -- nvmf/common.sh@294 -- # net_devs=() 00:14:35.272 06:54:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:35.272 06:54:56 -- nvmf/common.sh@295 -- # e810=() 00:14:35.272 06:54:56 -- nvmf/common.sh@295 -- # local -ga e810 00:14:35.272 06:54:56 -- nvmf/common.sh@296 -- # x722=() 00:14:35.272 06:54:56 -- nvmf/common.sh@296 -- # local -ga x722 00:14:35.272 06:54:56 -- nvmf/common.sh@297 -- # mlx=() 00:14:35.272 06:54:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:35.272 06:54:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.273 06:54:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:35.273 06:54:56 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
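
build_nvmf_app_args, traced just above for the multitarget run, assembles the nvmf_tgt command line by appending the shared-memory id and tracepoint mask to a bash array; nvmfappstart then adds the core mask. Pieced together as a sketch — the array mechanics are as traced, and the resulting invocation matches the "nvmf_tgt -i 0 -e 0xFFFF -m 0xF" line from the earlier test:

  NVMF_APP=(/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP_SHM_ID=0
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + tracepoint group mask
  "${NVMF_APP[@]}" -m 0xF &                     # 0xF = reactors on cores 0-3, as logged
  nvmfpid=$!
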
00:14:35.273 06:54:56 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:35.273 06:54:56 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:35.273 06:54:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:35.273 06:54:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:35.273 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:35.273 06:54:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:35.273 06:54:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:35.273 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:35.273 06:54:56 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:35.273 06:54:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:35.273 06:54:56 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.273 06:54:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.273 06:54:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.273 06:54:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:35.273 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.273 06:54:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.273 06:54:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.273 06:54:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.273 06:54:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:35.273 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.273 06:54:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:35.273 06:54:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:35.273 06:54:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:35.273 06:54:56 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:35.273 06:54:56 -- nvmf/common.sh@57 -- # uname 00:14:35.273 06:54:56 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:35.273 06:54:56 -- nvmf/common.sh@61 -- # modprobe ib_cm 
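
rdma_device_init begins by loading the kernel RDMA stack; the seven modprobe lines around this point are the complete set. As a standalone prerequisite check, taken straight from the trace:

  # Kernel modules required before an NVMe/RDMA target can listen,
  # in the order load_ib_rdma_modules loads them.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
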
00:14:35.273 06:54:56 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:35.273 06:54:56 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:35.273 06:54:56 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:35.273 06:54:56 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:35.273 06:54:56 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:35.273 06:54:56 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:35.273 06:54:56 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:35.273 06:54:56 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:35.273 06:54:56 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:35.273 06:54:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:35.273 06:54:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:35.273 06:54:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:35.273 06:54:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:35.273 06:54:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:35.273 06:54:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@104 -- # continue 2 00:14:35.273 06:54:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@104 -- # continue 2 00:14:35.273 06:54:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:35.273 06:54:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:35.273 06:54:56 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:35.273 06:54:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:35.273 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:35.273 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:35.273 altname enp217s0f0np0 00:14:35.273 altname ens818f0np0 00:14:35.273 inet 192.168.100.8/24 scope global mlx_0_0 00:14:35.273 valid_lft forever preferred_lft forever 00:14:35.273 06:54:56 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:35.273 06:54:56 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:35.273 06:54:56 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:35.273 06:54:56 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:35.273 7: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:35.273 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:35.273 altname enp217s0f1np1 00:14:35.273 altname ens818f1np1 00:14:35.273 inet 192.168.100.9/24 scope global mlx_0_1 00:14:35.273 valid_lft forever preferred_lft forever 00:14:35.273 06:54:56 -- nvmf/common.sh@410 -- # return 0 00:14:35.273 06:54:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:35.273 06:54:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:35.273 06:54:56 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:35.273 06:54:56 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:35.273 06:54:56 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:35.273 06:54:56 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:35.273 06:54:56 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:35.273 06:54:56 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:35.273 06:54:56 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:35.273 06:54:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@104 -- # continue 2 00:14:35.273 06:54:56 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:35.273 06:54:56 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:35.273 06:54:56 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@104 -- # continue 2 00:14:35.273 06:54:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:35.273 06:54:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:35.273 06:54:56 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:35.273 06:54:56 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:35.273 06:54:56 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:35.273 06:54:56 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:35.273 192.168.100.9' 00:14:35.273 06:54:56 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:35.273 192.168.100.9' 00:14:35.273 06:54:56 -- nvmf/common.sh@445 -- # head -n 1 00:14:35.273 06:54:56 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:35.273 06:54:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:35.274 192.168.100.9' 00:14:35.274 06:54:56 -- nvmf/common.sh@446 -- # tail -n +2 00:14:35.274 06:54:56 -- nvmf/common.sh@446 -- # head -n 1 00:14:35.274 06:54:56 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:35.274 06:54:56 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:35.274 06:54:56 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:35.274 06:54:56 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:35.274 06:54:56 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:35.274 06:54:56 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:35.274 06:54:56 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:35.274 06:54:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:35.274 06:54:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.274 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:14:35.274 06:54:56 -- nvmf/common.sh@469 -- # nvmfpid=1288982 00:14:35.274 06:54:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.274 06:54:56 -- nvmf/common.sh@470 -- # waitforlisten 1288982 00:14:35.274 06:54:56 -- common/autotest_common.sh@829 -- # '[' -z 1288982 ']' 00:14:35.274 06:54:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.274 06:54:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.274 06:54:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.274 06:54:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.274 06:54:56 -- common/autotest_common.sh@10 -- # set +x 00:14:35.274 [2024-12-15 06:54:56.806337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:35.274 [2024-12-15 06:54:56.806387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.274 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.274 [2024-12-15 06:54:56.877437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.532 [2024-12-15 06:54:56.915723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.532 [2024-12-15 06:54:56.915844] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.532 [2024-12-15 06:54:56.915854] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.532 [2024-12-15 06:54:56.915863] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
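Both allocate_nic_ips and get_available_rdma_ips in the trace reduce to the same parsing step: take the first IPv4 address on an interface, strip the prefix length, then split the collected list into first and second target IPs with head/tail. A simplified re-implementation of that pipeline, assuming the interface names are already known:

# Sketch of the get_ip_address / RDMA_IP_LIST parsing seen in the trace.
get_ip_address() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for i in mlx_0_0 mlx_0_1; do get_ip_address "$i"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9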
00:14:35.532 [2024-12-15 06:54:56.915904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.532 [2024-12-15 06:54:56.916009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.532 [2024-12-15 06:54:56.916039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.532 [2024-12-15 06:54:56.916039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.099 06:54:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.099 06:54:57 -- common/autotest_common.sh@862 -- # return 0 00:14:36.099 06:54:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:36.099 06:54:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:36.099 06:54:57 -- common/autotest_common.sh@10 -- # set +x 00:14:36.099 06:54:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.099 06:54:57 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:36.099 06:54:57 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:36.099 06:54:57 -- target/multitarget.sh@21 -- # jq length 00:14:36.357 06:54:57 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:36.357 06:54:57 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:36.357 "nvmf_tgt_1" 00:14:36.357 06:54:57 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:36.357 "nvmf_tgt_2" 00:14:36.357 06:54:57 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:36.357 06:54:57 -- target/multitarget.sh@28 -- # jq length 00:14:36.616 06:54:58 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:36.616 06:54:58 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:36.616 true 00:14:36.616 06:54:58 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:36.874 true 00:14:36.874 06:54:58 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:36.874 06:54:58 -- target/multitarget.sh@35 -- # jq length 00:14:36.874 06:54:58 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:36.874 06:54:58 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:36.874 06:54:58 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:36.874 06:54:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:36.874 06:54:58 -- nvmf/common.sh@116 -- # sync 00:14:36.874 06:54:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:36.874 06:54:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:36.874 06:54:58 -- nvmf/common.sh@119 -- # set +e 00:14:36.874 06:54:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:36.874 06:54:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:36.874 rmmod nvme_rdma 00:14:36.874 rmmod nvme_fabrics 00:14:36.874 06:54:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:36.874 06:54:58 -- nvmf/common.sh@123 -- # set -e 00:14:36.874 06:54:58 -- nvmf/common.sh@124 -- # 
return 0 00:14:36.874 06:54:58 -- nvmf/common.sh@477 -- # '[' -n 1288982 ']' 00:14:36.874 06:54:58 -- nvmf/common.sh@478 -- # killprocess 1288982 00:14:36.874 06:54:58 -- common/autotest_common.sh@936 -- # '[' -z 1288982 ']' 00:14:36.874 06:54:58 -- common/autotest_common.sh@940 -- # kill -0 1288982 00:14:36.874 06:54:58 -- common/autotest_common.sh@941 -- # uname 00:14:36.874 06:54:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:36.874 06:54:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1288982 00:14:36.874 06:54:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:36.874 06:54:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:36.874 06:54:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1288982' 00:14:36.874 killing process with pid 1288982 00:14:36.874 06:54:58 -- common/autotest_common.sh@955 -- # kill 1288982 00:14:36.874 06:54:58 -- common/autotest_common.sh@960 -- # wait 1288982 00:14:37.133 06:54:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:37.133 06:54:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:37.133 00:14:37.133 real 0m8.584s 00:14:37.133 user 0m9.456s 00:14:37.133 sys 0m5.483s 00:14:37.133 06:54:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:37.133 06:54:58 -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 ************************************ 00:14:37.133 END TEST nvmf_multitarget 00:14:37.133 ************************************ 00:14:37.133 06:54:58 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:37.133 06:54:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:37.133 06:54:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.133 06:54:58 -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 ************************************ 00:14:37.133 START TEST nvmf_rpc 00:14:37.133 ************************************ 00:14:37.133 06:54:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:37.392 * Looking for test storage... 
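The multitarget teardown above (killprocess 1288982) first verifies that the pid is still alive and names the expected reactor process, then terminates and reaps it. A reduced sketch of that pattern, assuming the pid was captured when nvmf_tgt was launched and that the process is a child of the harness shell (which is what makes the final wait valid):

# Sketch of the killprocess teardown in the trace.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" || return 1                        # still running?
  name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0
  [[ $name == sudo ]] && return 1                   # never kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                           # reap our own child
}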
00:14:37.392 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:37.392 06:54:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:37.392 06:54:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:37.392 06:54:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:37.392 06:54:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:37.392 06:54:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:37.392 06:54:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:37.392 06:54:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:37.392 06:54:58 -- scripts/common.sh@335 -- # IFS=.-: 00:14:37.392 06:54:58 -- scripts/common.sh@335 -- # read -ra ver1 00:14:37.392 06:54:58 -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.392 06:54:58 -- scripts/common.sh@336 -- # read -ra ver2 00:14:37.392 06:54:58 -- scripts/common.sh@337 -- # local 'op=<' 00:14:37.392 06:54:58 -- scripts/common.sh@339 -- # ver1_l=2 00:14:37.392 06:54:58 -- scripts/common.sh@340 -- # ver2_l=1 00:14:37.392 06:54:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:37.392 06:54:58 -- scripts/common.sh@343 -- # case "$op" in 00:14:37.392 06:54:58 -- scripts/common.sh@344 -- # : 1 00:14:37.392 06:54:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:37.392 06:54:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:37.392 06:54:58 -- scripts/common.sh@364 -- # decimal 1 00:14:37.392 06:54:58 -- scripts/common.sh@352 -- # local d=1 00:14:37.392 06:54:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.392 06:54:58 -- scripts/common.sh@354 -- # echo 1 00:14:37.392 06:54:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:37.392 06:54:58 -- scripts/common.sh@365 -- # decimal 2 00:14:37.392 06:54:58 -- scripts/common.sh@352 -- # local d=2 00:14:37.392 06:54:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.392 06:54:58 -- scripts/common.sh@354 -- # echo 2 00:14:37.392 06:54:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:37.392 06:54:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:37.392 06:54:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:37.392 06:54:58 -- scripts/common.sh@367 -- # return 0 00:14:37.392 06:54:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.392 06:54:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:37.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.392 --rc genhtml_branch_coverage=1 00:14:37.392 --rc genhtml_function_coverage=1 00:14:37.392 --rc genhtml_legend=1 00:14:37.392 --rc geninfo_all_blocks=1 00:14:37.392 --rc geninfo_unexecuted_blocks=1 00:14:37.392 00:14:37.392 ' 00:14:37.392 06:54:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:37.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.392 --rc genhtml_branch_coverage=1 00:14:37.392 --rc genhtml_function_coverage=1 00:14:37.392 --rc genhtml_legend=1 00:14:37.392 --rc geninfo_all_blocks=1 00:14:37.392 --rc geninfo_unexecuted_blocks=1 00:14:37.392 00:14:37.392 ' 00:14:37.392 06:54:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:37.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.392 --rc genhtml_branch_coverage=1 00:14:37.392 --rc genhtml_function_coverage=1 00:14:37.392 --rc genhtml_legend=1 00:14:37.392 --rc geninfo_all_blocks=1 00:14:37.392 --rc geninfo_unexecuted_blocks=1 00:14:37.392 00:14:37.392 ' 
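The lcov probe bracketing the test-storage search runs cmp_versions: both version strings are split on ".-:" into arrays and compared field by field. A compact sketch of the "lt 1.15 2" check, assuming purely numeric fields (the real scripts/common.sh also normalises fields through its decimal helper and supports more operators):

# Sketch of the cmp_versions / lt check from scripts/common.sh.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local IFS=.-: op=$2
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
    (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal: neither strictly less nor greater
}
lt 1.15 2 && echo "lcov < 2: use legacy --rc option names"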
00:14:37.392 06:54:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:37.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.392 --rc genhtml_branch_coverage=1 00:14:37.392 --rc genhtml_function_coverage=1 00:14:37.392 --rc genhtml_legend=1 00:14:37.393 --rc geninfo_all_blocks=1 00:14:37.393 --rc geninfo_unexecuted_blocks=1 00:14:37.393 00:14:37.393 ' 00:14:37.393 06:54:58 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.393 06:54:58 -- nvmf/common.sh@7 -- # uname -s 00:14:37.393 06:54:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.393 06:54:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.393 06:54:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.393 06:54:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.393 06:54:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.393 06:54:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.393 06:54:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.393 06:54:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.393 06:54:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.393 06:54:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.393 06:54:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:37.393 06:54:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:37.393 06:54:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.393 06:54:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.393 06:54:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.393 06:54:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:37.393 06:54:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.393 06:54:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.393 06:54:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.393 06:54:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.393 06:54:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.393 06:54:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.393 06:54:58 -- paths/export.sh@5 -- # export PATH 00:14:37.393 06:54:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.393 06:54:58 -- nvmf/common.sh@46 -- # : 0 00:14:37.393 06:54:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:37.393 06:54:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:37.393 06:54:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:37.393 06:54:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.393 06:54:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.393 06:54:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:37.393 06:54:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:37.393 06:54:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:37.393 06:54:58 -- target/rpc.sh@11 -- # loops=5 00:14:37.393 06:54:58 -- target/rpc.sh@23 -- # nvmftestinit 00:14:37.393 06:54:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:37.393 06:54:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.393 06:54:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:37.393 06:54:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:37.393 06:54:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:37.393 06:54:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.393 06:54:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.393 06:54:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.393 06:54:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:37.393 06:54:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:37.393 06:54:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:37.393 06:54:58 -- common/autotest_common.sh@10 -- # set +x 00:14:43.961 06:55:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:43.961 06:55:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:43.961 06:55:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:43.961 06:55:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:43.961 06:55:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:43.961 06:55:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:43.961 06:55:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:43.961 06:55:05 -- nvmf/common.sh@294 -- # net_devs=() 00:14:43.961 06:55:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:43.961 06:55:05 -- nvmf/common.sh@295 -- # e810=() 00:14:43.961 06:55:05 -- nvmf/common.sh@295 -- # local -ga e810 00:14:43.961 
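rpc.sh re-sources test/nvmf/common.sh, which pins the connection parameters used for the rest of the test: ports 4420-4422, the 192.168.100.0/24 prefix, and a host NQN/ID pair generated once with nvme gen-hostnqn. A sketch of how those defaults compose into the connect call seen later in the trace; the derivation of NVME_HOSTID from the NQN's uuid suffix is an assumption based on the two values shown above:

# Sketch: the connection defaults from test/nvmf/common.sh and the
# `nvme connect` invocation they feed, as in the trace.
NVMF_PORT=4420
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
NVME_HOSTNQN=$(nvme gen-hostnqn)                   # nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*:}                    # assumed: uuid suffix reused as host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_FIRST_TARGET_IP=$NVMF_IP_PREFIX.$NVMF_IP_LEAST_ADDR
nvme connect -i 15 "${NVME_HOST[@]}" -t rdma \
  -n nqn.2016-06.io.spdk:cnode1 -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"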
06:55:05 -- nvmf/common.sh@296 -- # x722=() 00:14:43.961 06:55:05 -- nvmf/common.sh@296 -- # local -ga x722 00:14:43.961 06:55:05 -- nvmf/common.sh@297 -- # mlx=() 00:14:43.961 06:55:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:43.961 06:55:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.961 06:55:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:43.961 06:55:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:43.961 06:55:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:43.961 06:55:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:43.961 06:55:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:43.961 06:55:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:43.961 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:43.961 06:55:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:43.961 06:55:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:43.961 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:43.961 06:55:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:43.961 06:55:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:43.961 06:55:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.961 06:55:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.961 06:55:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:43.961 06:55:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:43.961 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:43.961 06:55:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.961 06:55:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.961 06:55:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.961 06:55:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.961 06:55:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:43.961 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:43.961 06:55:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.961 06:55:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:43.961 06:55:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:43.961 06:55:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:43.961 06:55:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:43.961 06:55:05 -- nvmf/common.sh@57 -- # uname 00:14:43.961 06:55:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:43.961 06:55:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:43.961 06:55:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:43.961 06:55:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:43.961 06:55:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:43.961 06:55:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:43.961 06:55:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:43.961 06:55:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:43.961 06:55:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:43.961 06:55:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:43.961 06:55:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:43.961 06:55:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:43.961 06:55:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:43.961 06:55:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:43.961 06:55:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:43.961 06:55:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:43.961 06:55:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:43.961 06:55:05 -- nvmf/common.sh@104 -- # continue 2 00:14:43.961 06:55:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:43.961 06:55:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:43.961 06:55:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:43.961 06:55:05 -- nvmf/common.sh@104 -- # continue 2 00:14:43.961 06:55:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:43.961 06:55:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
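get_rdma_if_list, traced above, intersects the PCI-derived net_devs with the RDMA-capable interfaces reported by rxe_cfg, using continue 2 to emit each net_dev once on its first match. A stand-alone sketch, assuming net_devs is already populated and rxe_cfg wraps scripts/rxe_cfg_small.sh as in the trace (the empty-list fallback of the real helper is omitted):

# Sketch of get_rdma_if_list: print each net_dev that also appears in
# the rxe_cfg output, matching the nested-loop pattern in the trace.
get_rdma_if_list() {
  local net_dev rxe_net_dev rxe_net_devs
  mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)     # assumed: one ifname per line
  for net_dev in "${net_devs[@]}"; do
    for rxe_net_dev in "${rxe_net_devs[@]}"; do
      if [[ $net_dev == "$rxe_net_dev" ]]; then
        echo "$net_dev"
        continue 2                                 # next net_dev after first match
      fi
    done
  done
}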
00:14:43.961 06:55:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:43.961 06:55:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:43.961 06:55:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:43.961 06:55:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:43.961 06:55:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:43.962 06:55:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:43.962 06:55:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:43.962 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:43.962 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:43.962 altname enp217s0f0np0 00:14:43.962 altname ens818f0np0 00:14:43.962 inet 192.168.100.8/24 scope global mlx_0_0 00:14:43.962 valid_lft forever preferred_lft forever 00:14:43.962 06:55:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:43.962 06:55:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:43.962 06:55:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:43.962 06:55:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:43.962 06:55:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:43.962 06:55:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:43.962 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:43.962 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:43.962 altname enp217s0f1np1 00:14:43.962 altname ens818f1np1 00:14:43.962 inet 192.168.100.9/24 scope global mlx_0_1 00:14:43.962 valid_lft forever preferred_lft forever 00:14:43.962 06:55:05 -- nvmf/common.sh@410 -- # return 0 00:14:43.962 06:55:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:43.962 06:55:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:43.962 06:55:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:43.962 06:55:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:43.962 06:55:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:43.962 06:55:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:43.962 06:55:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:43.962 06:55:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:43.962 06:55:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:43.962 06:55:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:43.962 06:55:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:43.962 06:55:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:43.962 06:55:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:43.962 06:55:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:43.962 06:55:05 -- nvmf/common.sh@104 -- # continue 2 00:14:43.962 06:55:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:43.962 06:55:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:43.962 06:55:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:43.962 06:55:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:43.962 06:55:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:43.962 06:55:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:43.962 06:55:05 -- nvmf/common.sh@104 -- # continue 2 00:14:43.962 06:55:05 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:43.962 06:55:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:43.962 06:55:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:43.962 06:55:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:43.962 06:55:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:43.962 06:55:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:43.962 06:55:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:43.962 06:55:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:43.962 192.168.100.9' 00:14:43.962 06:55:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:43.962 192.168.100.9' 00:14:43.962 06:55:05 -- nvmf/common.sh@445 -- # head -n 1 00:14:43.962 06:55:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:43.962 06:55:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:43.962 192.168.100.9' 00:14:43.962 06:55:05 -- nvmf/common.sh@446 -- # tail -n +2 00:14:43.962 06:55:05 -- nvmf/common.sh@446 -- # head -n 1 00:14:43.962 06:55:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:43.962 06:55:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:43.962 06:55:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:43.962 06:55:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:43.962 06:55:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:43.962 06:55:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:43.962 06:55:05 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:43.962 06:55:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:43.962 06:55:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.962 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:14:43.962 06:55:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.962 06:55:05 -- nvmf/common.sh@469 -- # nvmfpid=1292732 00:14:43.962 06:55:05 -- nvmf/common.sh@470 -- # waitforlisten 1292732 00:14:43.962 06:55:05 -- common/autotest_common.sh@829 -- # '[' -z 1292732 ']' 00:14:43.962 06:55:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.962 06:55:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.962 06:55:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.962 06:55:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.962 06:55:05 -- common/autotest_common.sh@10 -- # set +x 00:14:43.962 [2024-12-15 06:55:05.521299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:43.962 [2024-12-15 06:55:05.521347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.962 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.962 [2024-12-15 06:55:05.589762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.221 [2024-12-15 06:55:05.628465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:44.221 [2024-12-15 06:55:05.628572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.221 [2024-12-15 06:55:05.628582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.221 [2024-12-15 06:55:05.628591] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.221 [2024-12-15 06:55:05.628640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.221 [2024-12-15 06:55:05.628752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.221 [2024-12-15 06:55:05.628835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.221 [2024-12-15 06:55:05.628836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.789 06:55:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.789 06:55:06 -- common/autotest_common.sh@862 -- # return 0 00:14:44.789 06:55:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:44.789 06:55:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.789 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:44.789 06:55:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.789 06:55:06 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:44.789 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.789 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.048 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.048 06:55:06 -- target/rpc.sh@26 -- # stats='{ 00:14:45.049 "tick_rate": 2500000000, 00:14:45.049 "poll_groups": [ 00:14:45.049 { 00:14:45.049 "name": "nvmf_tgt_poll_group_0", 00:14:45.049 "admin_qpairs": 0, 00:14:45.049 "io_qpairs": 0, 00:14:45.049 "current_admin_qpairs": 0, 00:14:45.049 "current_io_qpairs": 0, 00:14:45.049 "pending_bdev_io": 0, 00:14:45.049 "completed_nvme_io": 0, 00:14:45.049 "transports": [] 00:14:45.049 }, 00:14:45.049 { 00:14:45.049 "name": "nvmf_tgt_poll_group_1", 00:14:45.049 "admin_qpairs": 0, 00:14:45.049 "io_qpairs": 0, 00:14:45.049 "current_admin_qpairs": 0, 00:14:45.049 "current_io_qpairs": 0, 00:14:45.049 "pending_bdev_io": 0, 00:14:45.049 "completed_nvme_io": 0, 00:14:45.049 "transports": [] 00:14:45.049 }, 00:14:45.049 { 00:14:45.049 "name": "nvmf_tgt_poll_group_2", 00:14:45.049 "admin_qpairs": 0, 00:14:45.049 "io_qpairs": 0, 00:14:45.049 "current_admin_qpairs": 0, 00:14:45.049 "current_io_qpairs": 0, 00:14:45.049 "pending_bdev_io": 0, 00:14:45.049 "completed_nvme_io": 0, 00:14:45.049 "transports": [] 00:14:45.049 }, 00:14:45.049 { 00:14:45.049 "name": "nvmf_tgt_poll_group_3", 00:14:45.049 "admin_qpairs": 0, 00:14:45.049 "io_qpairs": 0, 00:14:45.049 "current_admin_qpairs": 0, 00:14:45.049 "current_io_qpairs": 0, 00:14:45.049 "pending_bdev_io": 0, 00:14:45.049 "completed_nvme_io": 0, 00:14:45.049 "transports": [] 
00:14:45.049 } 00:14:45.049 ] 00:14:45.049 }' 00:14:45.049 06:55:06 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:45.049 06:55:06 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:45.049 06:55:06 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:45.049 06:55:06 -- target/rpc.sh@15 -- # wc -l 00:14:45.049 06:55:06 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:45.049 06:55:06 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:45.049 06:55:06 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:45.049 06:55:06 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:45.049 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.049 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.049 [2024-12-15 06:55:06.557147] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17a7130/0x17ab600) succeed. 00:14:45.049 [2024-12-15 06:55:06.566292] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17a86d0/0x17ecca0) succeed. 00:14:45.049 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.049 06:55:06 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:45.049 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.049 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.308 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.308 06:55:06 -- target/rpc.sh@33 -- # stats='{ 00:14:45.308 "tick_rate": 2500000000, 00:14:45.308 "poll_groups": [ 00:14:45.308 { 00:14:45.308 "name": "nvmf_tgt_poll_group_0", 00:14:45.308 "admin_qpairs": 0, 00:14:45.308 "io_qpairs": 0, 00:14:45.308 "current_admin_qpairs": 0, 00:14:45.308 "current_io_qpairs": 0, 00:14:45.308 "pending_bdev_io": 0, 00:14:45.308 "completed_nvme_io": 0, 00:14:45.308 "transports": [ 00:14:45.308 { 00:14:45.308 "trtype": "RDMA", 00:14:45.308 "pending_data_buffer": 0, 00:14:45.308 "devices": [ 00:14:45.308 { 00:14:45.308 "name": "mlx5_0", 00:14:45.308 "polls": 16038, 00:14:45.308 "idle_polls": 16038, 00:14:45.308 "completions": 0, 00:14:45.308 "requests": 0, 00:14:45.308 "request_latency": 0, 00:14:45.308 "pending_free_request": 0, 00:14:45.308 "pending_rdma_read": 0, 00:14:45.308 "pending_rdma_write": 0, 00:14:45.308 "pending_rdma_send": 0, 00:14:45.308 "total_send_wrs": 0, 00:14:45.308 "send_doorbell_updates": 0, 00:14:45.308 "total_recv_wrs": 4096, 00:14:45.308 "recv_doorbell_updates": 1 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "name": "mlx5_1", 00:14:45.308 "polls": 16038, 00:14:45.308 "idle_polls": 16038, 00:14:45.308 "completions": 0, 00:14:45.308 "requests": 0, 00:14:45.308 "request_latency": 0, 00:14:45.308 "pending_free_request": 0, 00:14:45.308 "pending_rdma_read": 0, 00:14:45.308 "pending_rdma_write": 0, 00:14:45.308 "pending_rdma_send": 0, 00:14:45.308 "total_send_wrs": 0, 00:14:45.308 "send_doorbell_updates": 0, 00:14:45.308 "total_recv_wrs": 4096, 00:14:45.308 "recv_doorbell_updates": 1 00:14:45.308 } 00:14:45.308 ] 00:14:45.308 } 00:14:45.308 ] 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "name": "nvmf_tgt_poll_group_1", 00:14:45.308 "admin_qpairs": 0, 00:14:45.308 "io_qpairs": 0, 00:14:45.308 "current_admin_qpairs": 0, 00:14:45.308 "current_io_qpairs": 0, 00:14:45.308 "pending_bdev_io": 0, 00:14:45.308 "completed_nvme_io": 0, 00:14:45.308 "transports": [ 00:14:45.308 { 00:14:45.309 "trtype": "RDMA", 00:14:45.309 "pending_data_buffer": 0, 00:14:45.309 "devices": [ 00:14:45.309 { 00:14:45.309 "name": "mlx5_0", 00:14:45.309 "polls": 10274, 
00:14:45.309 "idle_polls": 10274, 00:14:45.309 "completions": 0, 00:14:45.309 "requests": 0, 00:14:45.309 "request_latency": 0, 00:14:45.309 "pending_free_request": 0, 00:14:45.309 "pending_rdma_read": 0, 00:14:45.309 "pending_rdma_write": 0, 00:14:45.309 "pending_rdma_send": 0, 00:14:45.309 "total_send_wrs": 0, 00:14:45.309 "send_doorbell_updates": 0, 00:14:45.309 "total_recv_wrs": 4096, 00:14:45.309 "recv_doorbell_updates": 1 00:14:45.309 }, 00:14:45.309 { 00:14:45.309 "name": "mlx5_1", 00:14:45.309 "polls": 10274, 00:14:45.309 "idle_polls": 10274, 00:14:45.309 "completions": 0, 00:14:45.309 "requests": 0, 00:14:45.309 "request_latency": 0, 00:14:45.309 "pending_free_request": 0, 00:14:45.309 "pending_rdma_read": 0, 00:14:45.309 "pending_rdma_write": 0, 00:14:45.309 "pending_rdma_send": 0, 00:14:45.309 "total_send_wrs": 0, 00:14:45.309 "send_doorbell_updates": 0, 00:14:45.309 "total_recv_wrs": 4096, 00:14:45.309 "recv_doorbell_updates": 1 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 }, 00:14:45.309 { 00:14:45.309 "name": "nvmf_tgt_poll_group_2", 00:14:45.309 "admin_qpairs": 0, 00:14:45.309 "io_qpairs": 0, 00:14:45.309 "current_admin_qpairs": 0, 00:14:45.309 "current_io_qpairs": 0, 00:14:45.309 "pending_bdev_io": 0, 00:14:45.309 "completed_nvme_io": 0, 00:14:45.309 "transports": [ 00:14:45.309 { 00:14:45.309 "trtype": "RDMA", 00:14:45.309 "pending_data_buffer": 0, 00:14:45.309 "devices": [ 00:14:45.309 { 00:14:45.309 "name": "mlx5_0", 00:14:45.309 "polls": 5667, 00:14:45.309 "idle_polls": 5667, 00:14:45.309 "completions": 0, 00:14:45.309 "requests": 0, 00:14:45.309 "request_latency": 0, 00:14:45.309 "pending_free_request": 0, 00:14:45.309 "pending_rdma_read": 0, 00:14:45.309 "pending_rdma_write": 0, 00:14:45.309 "pending_rdma_send": 0, 00:14:45.309 "total_send_wrs": 0, 00:14:45.309 "send_doorbell_updates": 0, 00:14:45.309 "total_recv_wrs": 4096, 00:14:45.309 "recv_doorbell_updates": 1 00:14:45.309 }, 00:14:45.309 { 00:14:45.309 "name": "mlx5_1", 00:14:45.309 "polls": 5667, 00:14:45.309 "idle_polls": 5667, 00:14:45.309 "completions": 0, 00:14:45.309 "requests": 0, 00:14:45.309 "request_latency": 0, 00:14:45.309 "pending_free_request": 0, 00:14:45.309 "pending_rdma_read": 0, 00:14:45.309 "pending_rdma_write": 0, 00:14:45.309 "pending_rdma_send": 0, 00:14:45.309 "total_send_wrs": 0, 00:14:45.309 "send_doorbell_updates": 0, 00:14:45.309 "total_recv_wrs": 4096, 00:14:45.309 "recv_doorbell_updates": 1 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 }, 00:14:45.309 { 00:14:45.309 "name": "nvmf_tgt_poll_group_3", 00:14:45.309 "admin_qpairs": 0, 00:14:45.309 "io_qpairs": 0, 00:14:45.309 "current_admin_qpairs": 0, 00:14:45.309 "current_io_qpairs": 0, 00:14:45.309 "pending_bdev_io": 0, 00:14:45.309 "completed_nvme_io": 0, 00:14:45.309 "transports": [ 00:14:45.309 { 00:14:45.309 "trtype": "RDMA", 00:14:45.309 "pending_data_buffer": 0, 00:14:45.309 "devices": [ 00:14:45.309 { 00:14:45.309 "name": "mlx5_0", 00:14:45.309 "polls": 922, 00:14:45.309 "idle_polls": 922, 00:14:45.309 "completions": 0, 00:14:45.309 "requests": 0, 00:14:45.309 "request_latency": 0, 00:14:45.309 "pending_free_request": 0, 00:14:45.309 "pending_rdma_read": 0, 00:14:45.309 "pending_rdma_write": 0, 00:14:45.309 "pending_rdma_send": 0, 00:14:45.309 "total_send_wrs": 0, 00:14:45.309 "send_doorbell_updates": 0, 00:14:45.309 "total_recv_wrs": 4096, 00:14:45.309 "recv_doorbell_updates": 1 00:14:45.309 }, 00:14:45.309 { 00:14:45.309 "name": "mlx5_1", 00:14:45.309 "polls": 922, 
00:14:45.309 "idle_polls": 922, 00:14:45.309 "completions": 0, 00:14:45.309 "requests": 0, 00:14:45.309 "request_latency": 0, 00:14:45.309 "pending_free_request": 0, 00:14:45.309 "pending_rdma_read": 0, 00:14:45.309 "pending_rdma_write": 0, 00:14:45.309 "pending_rdma_send": 0, 00:14:45.309 "total_send_wrs": 0, 00:14:45.309 "send_doorbell_updates": 0, 00:14:45.309 "total_recv_wrs": 4096, 00:14:45.309 "recv_doorbell_updates": 1 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 }' 00:14:45.309 06:55:06 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:45.309 06:55:06 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:45.309 06:55:06 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:45.309 06:55:06 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:45.309 06:55:06 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:45.309 06:55:06 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:45.309 06:55:06 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:45.309 06:55:06 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:45.309 06:55:06 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:45.309 06:55:06 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:45.309 06:55:06 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:45.309 06:55:06 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:45.309 06:55:06 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:45.309 06:55:06 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:45.309 06:55:06 -- target/rpc.sh@15 -- # wc -l 00:14:45.309 06:55:06 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:45.309 06:55:06 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:45.309 06:55:06 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:45.309 06:55:06 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:45.309 06:55:06 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:45.309 06:55:06 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:45.309 06:55:06 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:45.309 06:55:06 -- target/rpc.sh@15 -- # wc -l 00:14:45.309 06:55:06 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:45.309 06:55:06 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:45.309 06:55:06 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:45.309 06:55:06 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:45.309 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.309 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 Malloc1 00:14:45.569 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.569 06:55:06 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:45.569 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.569 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.569 06:55:06 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.569 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.569 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.569 
06:55:06 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:45.569 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.569 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.569 06:55:06 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:45.569 06:55:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.569 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 [2024-12-15 06:55:06.992953] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:45.569 06:55:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.569 06:55:06 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:45.569 06:55:06 -- common/autotest_common.sh@650 -- # local es=0 00:14:45.569 06:55:06 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:45.569 06:55:06 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:45.569 06:55:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.569 06:55:06 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:45.569 06:55:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.569 06:55:07 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:45.569 06:55:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.569 06:55:07 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:45.569 06:55:07 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:45.569 06:55:07 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:45.569 [2024-12-15 06:55:07.038846] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:45.569 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:45.569 could not add new controller: failed to write to nvme-fabrics device 00:14:45.569 06:55:07 -- common/autotest_common.sh@653 -- # es=1 00:14:45.569 06:55:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.569 06:55:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.569 06:55:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.569 06:55:07 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:45.569 06:55:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.569 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 
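The connect attempt above is wrapped in NOT, a harness helper that runs a command expecting it to fail: it captures the exit status (flagging statuses above 128, which would mean death by signal) and itself succeeds only when the status is non-zero. A reduced sketch of the inversion, matching the (( !es == 0 )) check in the trace; the signal and allow-list handling of the real helper is omitted:

# Sketch of the NOT wrapper around the expected-to-fail `nvme connect`
# (here the host NQN is not yet on the subsystem's allowed-host list).
NOT() {
  local es=0
  "$@" || es=$?
  (( !es == 0 ))   # status 0 exactly when the wrapped command failed
}
NOT nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420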
06:55:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.569 06:55:07 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:46.507 06:55:08 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.507 06:55:08 -- common/autotest_common.sh@1187 -- # local i=0 00:14:46.507 06:55:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.507 06:55:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:46.507 06:55:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:49.042 06:55:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:49.042 06:55:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:49.042 06:55:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.042 06:55:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:49.042 06:55:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.042 06:55:10 -- common/autotest_common.sh@1197 -- # return 0 00:14:49.042 06:55:10 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.610 06:55:11 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.610 06:55:11 -- common/autotest_common.sh@1208 -- # local i=0 00:14:49.610 06:55:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:49.610 06:55:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.610 06:55:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:49.610 06:55:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.610 06:55:11 -- common/autotest_common.sh@1220 -- # return 0 00:14:49.610 06:55:11 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:49.610 06:55:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.610 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:14:49.610 06:55:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.610 06:55:11 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:49.610 06:55:11 -- common/autotest_common.sh@650 -- # local es=0 00:14:49.610 06:55:11 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:49.610 06:55:11 -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:49.610 06:55:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.610 06:55:11 -- common/autotest_common.sh@642 -- # type -t nvme 00:14:49.610 06:55:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.610 06:55:11 -- common/autotest_common.sh@644 -- # type -P nvme 00:14:49.610 06:55:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.610 06:55:11 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:49.610 
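The waitforserial helper traced here polls lsblk until a block device whose SERIAL column matches the SPDK serial shows up, giving the fabric connect up to 15 two-second retries. A sketch reconstructed from the trace — the loop bound, the sleep 2, and the lsblk | grep -c pipeline are all visible; the exact control flow and the || true guard are assumptions:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            # Count namespaces whose serial matches, e.g. SPDKISFASTANDAWESOME
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

waitforserial_disconnect does the inverse, returning once grep -q -w no longer finds the serial after nvme disconnect.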
06:55:11 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:49.610 06:55:11 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:49.610 [2024-12-15 06:55:11.160981] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:49.610 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:49.610 could not add new controller: failed to write to nvme-fabrics device 00:14:49.610 06:55:11 -- common/autotest_common.sh@653 -- # es=1 00:14:49.610 06:55:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:49.610 06:55:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:49.610 06:55:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:49.610 06:55:11 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:49.610 06:55:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.610 06:55:11 -- common/autotest_common.sh@10 -- # set +x 00:14:49.610 06:55:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.610 06:55:11 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:50.987 06:55:12 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.987 06:55:12 -- common/autotest_common.sh@1187 -- # local i=0 00:14:50.987 06:55:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.987 06:55:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:50.987 06:55:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:52.893 06:55:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:52.894 06:55:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:52.894 06:55:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.894 06:55:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:52.894 06:55:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.894 06:55:14 -- common/autotest_common.sh@1197 -- # return 0 00:14:52.894 06:55:14 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.830 06:55:15 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.830 06:55:15 -- common/autotest_common.sh@1208 -- # local i=0 00:14:53.830 06:55:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:53.830 06:55:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.830 06:55:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:53.830 06:55:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.830 06:55:15 -- common/autotest_common.sh@1220 -- # return 0 00:14:53.830 06:55:15 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.830 06:55:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.830 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 06:55:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:53.830 06:55:15 -- target/rpc.sh@81 -- # seq 1 5 00:14:53.830 06:55:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:53.830 06:55:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.830 06:55:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.830 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 06:55:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.830 06:55:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:53.830 06:55:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.830 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 [2024-12-15 06:55:15.250951] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.830 06:55:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.830 06:55:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:53.830 06:55:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.830 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 06:55:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.830 06:55:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.830 06:55:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.830 06:55:15 -- common/autotest_common.sh@10 -- # set +x 00:14:53.830 06:55:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.830 06:55:15 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:54.766 06:55:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:54.766 06:55:16 -- common/autotest_common.sh@1187 -- # local i=0 00:14:54.766 06:55:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.766 06:55:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:54.766 06:55:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:56.784 06:55:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:56.784 06:55:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:56.784 06:55:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.784 06:55:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:56.784 06:55:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.784 06:55:18 -- common/autotest_common.sh@1197 -- # return 0 00:14:56.784 06:55:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.721 06:55:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:57.721 06:55:19 -- common/autotest_common.sh@1208 -- # local i=0 00:14:57.721 06:55:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:57.721 06:55:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.721 06:55:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:57.721 06:55:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.721 06:55:19 -- common/autotest_common.sh@1220 -- # return 0 00:14:57.721 
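Each pass of the rpc.sh@81 loop tears the subsystem down and rebuilds it, verifying a full create/connect/disconnect/delete cycle from the host side. One iteration, condensed from the trace ($loops is 5 per the seq 1 5 above; every command below appears in the log):

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done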
06:55:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.721 06:55:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.721 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:14:57.721 06:55:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.721 06:55:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.721 06:55:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.721 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:14:57.721 06:55:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.721 06:55:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:57.721 06:55:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:57.721 06:55:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.721 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:14:57.722 06:55:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.722 06:55:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:57.722 06:55:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.722 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:14:57.722 [2024-12-15 06:55:19.286669] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:57.722 06:55:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.722 06:55:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:57.722 06:55:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.722 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:14:57.722 06:55:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.722 06:55:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:57.722 06:55:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.722 06:55:19 -- common/autotest_common.sh@10 -- # set +x 00:14:57.722 06:55:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.722 06:55:19 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:58.658 06:55:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:58.658 06:55:20 -- common/autotest_common.sh@1187 -- # local i=0 00:14:58.658 06:55:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.658 06:55:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:58.658 06:55:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:01.193 06:55:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:01.193 06:55:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:01.193 06:55:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.193 06:55:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:01.193 06:55:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.193 06:55:22 -- common/autotest_common.sh@1197 -- # return 0 00:15:01.193 06:55:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.762 06:55:23 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.762 06:55:23 -- common/autotest_common.sh@1208 -- # local i=0 00:15:01.762 06:55:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.762 06:55:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:01.762 06:55:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.762 06:55:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:01.762 06:55:23 -- common/autotest_common.sh@1220 -- # return 0 00:15:01.762 06:55:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:01.762 06:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.762 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:15:01.762 06:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.762 06:55:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.762 06:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.762 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:15:01.762 06:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.762 06:55:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:01.762 06:55:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:01.762 06:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.762 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:15:01.762 06:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.762 06:55:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:01.762 06:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.762 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:15:01.762 [2024-12-15 06:55:23.360531] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:01.762 06:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.762 06:55:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:01.762 06:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.762 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:15:01.763 06:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.763 06:55:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:01.763 06:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.763 06:55:23 -- common/autotest_common.sh@10 -- # set +x 00:15:01.763 06:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.763 06:55:23 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:03.141 06:55:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.141 06:55:24 -- common/autotest_common.sh@1187 -- # local i=0 00:15:03.141 06:55:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.141 06:55:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:03.141 06:55:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:05.045 06:55:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:05.045 06:55:26 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:05.045 06:55:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.045 06:55:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:05.045 06:55:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.045 06:55:26 -- common/autotest_common.sh@1197 -- # return 0 00:15:05.045 06:55:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.983 06:55:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.983 06:55:27 -- common/autotest_common.sh@1208 -- # local i=0 00:15:05.983 06:55:27 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:05.983 06:55:27 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.983 06:55:27 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:05.983 06:55:27 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.983 06:55:27 -- common/autotest_common.sh@1220 -- # return 0 00:15:05.983 06:55:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.983 06:55:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 06:55:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 06:55:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.983 06:55:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 06:55:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 06:55:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:05.983 06:55:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.983 06:55:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 06:55:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 06:55:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:05.983 06:55:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 [2024-12-15 06:55:27.417686] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:05.983 06:55:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 06:55:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:05.983 06:55:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 06:55:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 06:55:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.983 06:55:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 06:55:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 06:55:27 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:06.920 06:55:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.920 06:55:28 -- common/autotest_common.sh@1187 -- # local i=0 00:15:06.920 06:55:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.920 06:55:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:06.920 06:55:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:08.825 06:55:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:08.825 06:55:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:08.825 06:55:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.825 06:55:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:08.825 06:55:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.825 06:55:30 -- common/autotest_common.sh@1197 -- # return 0 00:15:08.825 06:55:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.762 06:55:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.762 06:55:31 -- common/autotest_common.sh@1208 -- # local i=0 00:15:09.762 06:55:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:09.762 06:55:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.021 06:55:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:10.021 06:55:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.021 06:55:31 -- common/autotest_common.sh@1220 -- # return 0 00:15:10.021 06:55:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:10.021 06:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.021 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:10.021 06:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.021 06:55:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.021 06:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.021 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:10.021 06:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.021 06:55:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:10.021 06:55:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.021 06:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.021 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:10.021 06:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.021 06:55:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:10.021 06:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.021 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:10.021 [2024-12-15 06:55:31.456130] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:10.021 06:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.021 06:55:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:10.021 06:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.021 06:55:31 -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.021 06:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.021 06:55:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.021 06:55:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.021 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:15:10.021 06:55:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.021 06:55:31 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:10.957 06:55:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:10.957 06:55:32 -- common/autotest_common.sh@1187 -- # local i=0 00:15:10.957 06:55:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.957 06:55:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:10.957 06:55:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:12.861 06:55:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:12.861 06:55:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:12.861 06:55:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.861 06:55:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:12.861 06:55:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.861 06:55:34 -- common/autotest_common.sh@1197 -- # return 0 00:15:12.861 06:55:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.237 06:55:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@1208 -- # local i=0 00:15:14.237 06:55:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:14.237 06:55:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:14.237 06:55:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@1220 -- # return 0 00:15:14.237 06:55:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@99 -- # seq 1 5 00:15:14.237 06:55:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:14.237 06:55:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 [2024-12-15 06:55:35.523721] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:14.237 06:55:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 [2024-12-15 06:55:35.571907] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 
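The rpc.sh@99 loop running here repeats the lifecycle five more times but without any host connect in between, exercising the RPC path back-to-back; note the namespace now takes the default nsid (no -n override), so the matching remove uses 1. Condensed, one iteration (commands as traced; the loop wrapper is inferred from the for markers):

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done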
06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:14.237 06:55:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 [2024-12-15 06:55:35.620067] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:14.237 06:55:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 [2024-12-15 06:55:35.668215] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.237 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.237 06:55:35 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:14.237 06:55:35 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:14.237 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.237 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.238 06:55:35 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:14.238 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.238 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 [2024-12-15 06:55:35.716405] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:14.238 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.238 06:55:35 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:14.238 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.238 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.238 06:55:35 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:14.238 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.238 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.238 06:55:35 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.238 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.238 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.238 06:55:35 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.238 06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.238 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.238 06:55:35 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:14.238 
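The nvmf_get_stats dump that follows reports one entry per poll group and, for the RDMA transport, one entry per mlx5 device; the assertions after it sum those fields across the whole tree (105 io_qpairs, 1288 completions, 123158746 ns of request latency). An equivalent standalone query against a running target — the rpc.py invocation is an assumption, only the JSON shape is taken from the dump below:

    # Sum request latency across every poll group / transport / device
    ./scripts/rpc.py nvmf_get_stats \
        | jq '[.poll_groups[].transports[].devices[].request_latency] | add'

This uses jq's add on a collected array instead of the awk sum the test script applies to the same filter.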
06:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.238 06:55:35 -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 06:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.238 06:55:35 -- target/rpc.sh@110 -- # stats='{ 00:15:14.238 "tick_rate": 2500000000, 00:15:14.238 "poll_groups": [ 00:15:14.238 { 00:15:14.238 "name": "nvmf_tgt_poll_group_0", 00:15:14.238 "admin_qpairs": 2, 00:15:14.238 "io_qpairs": 27, 00:15:14.238 "current_admin_qpairs": 0, 00:15:14.238 "current_io_qpairs": 0, 00:15:14.238 "pending_bdev_io": 0, 00:15:14.238 "completed_nvme_io": 152, 00:15:14.238 "transports": [ 00:15:14.238 { 00:15:14.238 "trtype": "RDMA", 00:15:14.238 "pending_data_buffer": 0, 00:15:14.238 "devices": [ 00:15:14.238 { 00:15:14.238 "name": "mlx5_0", 00:15:14.238 "polls": 3505701, 00:15:14.238 "idle_polls": 3505326, 00:15:14.238 "completions": 415, 00:15:14.238 "requests": 207, 00:15:14.238 "request_latency": 37982456, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 359, 00:15:14.238 "send_doorbell_updates": 184, 00:15:14.238 "total_recv_wrs": 4303, 00:15:14.238 "recv_doorbell_updates": 184 00:15:14.238 }, 00:15:14.238 { 00:15:14.238 "name": "mlx5_1", 00:15:14.238 "polls": 3505701, 00:15:14.238 "idle_polls": 3505701, 00:15:14.238 "completions": 0, 00:15:14.238 "requests": 0, 00:15:14.238 "request_latency": 0, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 0, 00:15:14.238 "send_doorbell_updates": 0, 00:15:14.238 "total_recv_wrs": 4096, 00:15:14.238 "recv_doorbell_updates": 1 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 }, 00:15:14.238 { 00:15:14.238 "name": "nvmf_tgt_poll_group_1", 00:15:14.238 "admin_qpairs": 2, 00:15:14.238 "io_qpairs": 26, 00:15:14.238 "current_admin_qpairs": 0, 00:15:14.238 "current_io_qpairs": 0, 00:15:14.238 "pending_bdev_io": 0, 00:15:14.238 "completed_nvme_io": 77, 00:15:14.238 "transports": [ 00:15:14.238 { 00:15:14.238 "trtype": "RDMA", 00:15:14.238 "pending_data_buffer": 0, 00:15:14.238 "devices": [ 00:15:14.238 { 00:15:14.238 "name": "mlx5_0", 00:15:14.238 "polls": 3461827, 00:15:14.238 "idle_polls": 3461587, 00:15:14.238 "completions": 260, 00:15:14.238 "requests": 130, 00:15:14.238 "request_latency": 21022904, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 206, 00:15:14.238 "send_doorbell_updates": 119, 00:15:14.238 "total_recv_wrs": 4226, 00:15:14.238 "recv_doorbell_updates": 120 00:15:14.238 }, 00:15:14.238 { 00:15:14.238 "name": "mlx5_1", 00:15:14.238 "polls": 3461827, 00:15:14.238 "idle_polls": 3461827, 00:15:14.238 "completions": 0, 00:15:14.238 "requests": 0, 00:15:14.238 "request_latency": 0, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 0, 00:15:14.238 "send_doorbell_updates": 0, 00:15:14.238 "total_recv_wrs": 4096, 00:15:14.238 "recv_doorbell_updates": 1 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 }, 00:15:14.238 { 00:15:14.238 "name": "nvmf_tgt_poll_group_2", 00:15:14.238 "admin_qpairs": 1, 00:15:14.238 "io_qpairs": 26, 00:15:14.238 
"current_admin_qpairs": 0, 00:15:14.238 "current_io_qpairs": 0, 00:15:14.238 "pending_bdev_io": 0, 00:15:14.238 "completed_nvme_io": 100, 00:15:14.238 "transports": [ 00:15:14.238 { 00:15:14.238 "trtype": "RDMA", 00:15:14.238 "pending_data_buffer": 0, 00:15:14.238 "devices": [ 00:15:14.238 { 00:15:14.238 "name": "mlx5_0", 00:15:14.238 "polls": 3534415, 00:15:14.238 "idle_polls": 3534197, 00:15:14.238 "completions": 257, 00:15:14.238 "requests": 128, 00:15:14.238 "request_latency": 28650402, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 216, 00:15:14.238 "send_doorbell_updates": 108, 00:15:14.238 "total_recv_wrs": 4224, 00:15:14.238 "recv_doorbell_updates": 108 00:15:14.238 }, 00:15:14.238 { 00:15:14.238 "name": "mlx5_1", 00:15:14.238 "polls": 3534415, 00:15:14.238 "idle_polls": 3534415, 00:15:14.238 "completions": 0, 00:15:14.238 "requests": 0, 00:15:14.238 "request_latency": 0, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 0, 00:15:14.238 "send_doorbell_updates": 0, 00:15:14.238 "total_recv_wrs": 4096, 00:15:14.238 "recv_doorbell_updates": 1 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 }, 00:15:14.238 { 00:15:14.238 "name": "nvmf_tgt_poll_group_3", 00:15:14.238 "admin_qpairs": 2, 00:15:14.238 "io_qpairs": 26, 00:15:14.238 "current_admin_qpairs": 0, 00:15:14.238 "current_io_qpairs": 0, 00:15:14.238 "pending_bdev_io": 0, 00:15:14.238 "completed_nvme_io": 126, 00:15:14.238 "transports": [ 00:15:14.238 { 00:15:14.238 "trtype": "RDMA", 00:15:14.238 "pending_data_buffer": 0, 00:15:14.238 "devices": [ 00:15:14.238 { 00:15:14.238 "name": "mlx5_0", 00:15:14.238 "polls": 2735390, 00:15:14.238 "idle_polls": 2735076, 00:15:14.238 "completions": 356, 00:15:14.238 "requests": 178, 00:15:14.238 "request_latency": 35502984, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 302, 00:15:14.238 "send_doorbell_updates": 153, 00:15:14.238 "total_recv_wrs": 4274, 00:15:14.238 "recv_doorbell_updates": 154 00:15:14.238 }, 00:15:14.238 { 00:15:14.238 "name": "mlx5_1", 00:15:14.238 "polls": 2735390, 00:15:14.238 "idle_polls": 2735390, 00:15:14.238 "completions": 0, 00:15:14.238 "requests": 0, 00:15:14.238 "request_latency": 0, 00:15:14.238 "pending_free_request": 0, 00:15:14.238 "pending_rdma_read": 0, 00:15:14.238 "pending_rdma_write": 0, 00:15:14.238 "pending_rdma_send": 0, 00:15:14.238 "total_send_wrs": 0, 00:15:14.238 "send_doorbell_updates": 0, 00:15:14.238 "total_recv_wrs": 4096, 00:15:14.238 "recv_doorbell_updates": 1 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 } 00:15:14.238 ] 00:15:14.238 }' 00:15:14.238 06:55:35 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:14.238 06:55:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:14.238 06:55:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:14.238 06:55:35 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:14.238 06:55:35 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:14.238 06:55:35 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:14.238 06:55:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:14.238 
06:55:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:14.238 06:55:35 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:14.498 06:55:35 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:14.498 06:55:35 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:14.498 06:55:35 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:14.498 06:55:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:14.498 06:55:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:14.498 06:55:35 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:14.498 06:55:35 -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:15:14.498 06:55:35 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:14.498 06:55:35 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:14.498 06:55:35 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:14.498 06:55:35 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:14.498 06:55:35 -- target/rpc.sh@118 -- # (( 123158746 > 0 )) 00:15:14.498 06:55:35 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:14.498 06:55:35 -- target/rpc.sh@123 -- # nvmftestfini 00:15:14.498 06:55:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:14.498 06:55:35 -- nvmf/common.sh@116 -- # sync 00:15:14.498 06:55:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:14.498 06:55:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:14.498 06:55:35 -- nvmf/common.sh@119 -- # set +e 00:15:14.498 06:55:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:14.498 06:55:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:14.498 rmmod nvme_rdma 00:15:14.498 rmmod nvme_fabrics 00:15:14.498 06:55:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:14.498 06:55:36 -- nvmf/common.sh@123 -- # set -e 00:15:14.498 06:55:36 -- nvmf/common.sh@124 -- # return 0 00:15:14.498 06:55:36 -- nvmf/common.sh@477 -- # '[' -n 1292732 ']' 00:15:14.498 06:55:36 -- nvmf/common.sh@478 -- # killprocess 1292732 00:15:14.498 06:55:36 -- common/autotest_common.sh@936 -- # '[' -z 1292732 ']' 00:15:14.498 06:55:36 -- common/autotest_common.sh@940 -- # kill -0 1292732 00:15:14.498 06:55:36 -- common/autotest_common.sh@941 -- # uname 00:15:14.498 06:55:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.498 06:55:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1292732 00:15:14.498 06:55:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.498 06:55:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:14.498 06:55:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1292732' 00:15:14.498 killing process with pid 1292732 00:15:14.498 06:55:36 -- common/autotest_common.sh@955 -- # kill 1292732 00:15:14.498 06:55:36 -- common/autotest_common.sh@960 -- # wait 1292732 00:15:14.758 06:55:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:14.758 06:55:36 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:14.758 00:15:14.758 real 0m37.644s 00:15:14.758 user 2m4.661s 00:15:14.758 sys 0m6.758s 00:15:14.758 06:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:14.758 06:55:36 -- common/autotest_common.sh@10 -- # set +x 00:15:14.758 ************************************ 00:15:14.758 END TEST nvmf_rpc 00:15:14.758 ************************************ 00:15:15.018 06:55:36 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:15.018 06:55:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.018 06:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.018 06:55:36 -- common/autotest_common.sh@10 -- # set +x 00:15:15.018 ************************************ 00:15:15.018 START TEST nvmf_invalid 00:15:15.018 ************************************ 00:15:15.018 06:55:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:15.018 * Looking for test storage... 00:15:15.018 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:15.018 06:55:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:15.018 06:55:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:15.018 06:55:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:15.018 06:55:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:15.018 06:55:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:15.018 06:55:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:15.018 06:55:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:15.018 06:55:36 -- scripts/common.sh@335 -- # IFS=.-: 00:15:15.018 06:55:36 -- scripts/common.sh@335 -- # read -ra ver1 00:15:15.018 06:55:36 -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.018 06:55:36 -- scripts/common.sh@336 -- # read -ra ver2 00:15:15.018 06:55:36 -- scripts/common.sh@337 -- # local 'op=<' 00:15:15.018 06:55:36 -- scripts/common.sh@339 -- # ver1_l=2 00:15:15.018 06:55:36 -- scripts/common.sh@340 -- # ver2_l=1 00:15:15.018 06:55:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:15.018 06:55:36 -- scripts/common.sh@343 -- # case "$op" in 00:15:15.018 06:55:36 -- scripts/common.sh@344 -- # : 1 00:15:15.018 06:55:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:15.018 06:55:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.018 06:55:36 -- scripts/common.sh@364 -- # decimal 1 00:15:15.018 06:55:36 -- scripts/common.sh@352 -- # local d=1 00:15:15.018 06:55:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.018 06:55:36 -- scripts/common.sh@354 -- # echo 1 00:15:15.018 06:55:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:15.018 06:55:36 -- scripts/common.sh@365 -- # decimal 2 00:15:15.018 06:55:36 -- scripts/common.sh@352 -- # local d=2 00:15:15.018 06:55:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.018 06:55:36 -- scripts/common.sh@354 -- # echo 2 00:15:15.018 06:55:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:15.018 06:55:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:15.018 06:55:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:15.018 06:55:36 -- scripts/common.sh@367 -- # return 0 00:15:15.018 06:55:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.018 06:55:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:15.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.018 --rc genhtml_branch_coverage=1 00:15:15.018 --rc genhtml_function_coverage=1 00:15:15.018 --rc genhtml_legend=1 00:15:15.018 --rc geninfo_all_blocks=1 00:15:15.018 --rc geninfo_unexecuted_blocks=1 00:15:15.018 00:15:15.018 ' 00:15:15.018 06:55:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:15.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.018 --rc genhtml_branch_coverage=1 00:15:15.018 --rc genhtml_function_coverage=1 00:15:15.018 --rc genhtml_legend=1 00:15:15.018 --rc geninfo_all_blocks=1 00:15:15.018 --rc geninfo_unexecuted_blocks=1 00:15:15.018 00:15:15.018 ' 00:15:15.018 06:55:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:15.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.018 --rc genhtml_branch_coverage=1 00:15:15.018 --rc genhtml_function_coverage=1 00:15:15.018 --rc genhtml_legend=1 00:15:15.018 --rc geninfo_all_blocks=1 00:15:15.018 --rc geninfo_unexecuted_blocks=1 00:15:15.018 00:15:15.018 ' 00:15:15.018 06:55:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:15.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.018 --rc genhtml_branch_coverage=1 00:15:15.018 --rc genhtml_function_coverage=1 00:15:15.018 --rc genhtml_legend=1 00:15:15.018 --rc geninfo_all_blocks=1 00:15:15.018 --rc geninfo_unexecuted_blocks=1 00:15:15.018 00:15:15.018 ' 00:15:15.018 06:55:36 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.018 06:55:36 -- nvmf/common.sh@7 -- # uname -s 00:15:15.019 06:55:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.019 06:55:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.019 06:55:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.019 06:55:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.019 06:55:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.019 06:55:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.019 06:55:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.019 06:55:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.019 06:55:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.019 06:55:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.019 06:55:36 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:15.019 06:55:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:15.019 06:55:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.019 06:55:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.019 06:55:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.019 06:55:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:15.019 06:55:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.019 06:55:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.019 06:55:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.019 06:55:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 06:55:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 06:55:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 06:55:36 -- paths/export.sh@5 -- # export PATH 00:15:15.019 06:55:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 06:55:36 -- nvmf/common.sh@46 -- # : 0 00:15:15.019 06:55:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:15.019 06:55:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:15.019 06:55:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:15.019 06:55:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.019 06:55:36 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.019 06:55:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:15.019 06:55:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:15.019 06:55:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:15.019 06:55:36 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:15.019 06:55:36 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:15.019 06:55:36 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:15.019 06:55:36 -- target/invalid.sh@14 -- # target=foobar 00:15:15.019 06:55:36 -- target/invalid.sh@16 -- # RANDOM=0 00:15:15.019 06:55:36 -- target/invalid.sh@34 -- # nvmftestinit 00:15:15.019 06:55:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:15.019 06:55:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.019 06:55:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:15.019 06:55:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:15.019 06:55:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:15.019 06:55:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.019 06:55:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.019 06:55:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.019 06:55:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:15.019 06:55:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:15.019 06:55:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:15.019 06:55:36 -- common/autotest_common.sh@10 -- # set +x 00:15:21.634 06:55:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:21.634 06:55:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:21.634 06:55:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:21.634 06:55:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:21.634 06:55:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:21.634 06:55:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:21.634 06:55:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:21.634 06:55:43 -- nvmf/common.sh@294 -- # net_devs=() 00:15:21.634 06:55:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:21.634 06:55:43 -- nvmf/common.sh@295 -- # e810=() 00:15:21.634 06:55:43 -- nvmf/common.sh@295 -- # local -ga e810 00:15:21.634 06:55:43 -- nvmf/common.sh@296 -- # x722=() 00:15:21.634 06:55:43 -- nvmf/common.sh@296 -- # local -ga x722 00:15:21.634 06:55:43 -- nvmf/common.sh@297 -- # mlx=() 00:15:21.634 06:55:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:21.634 06:55:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.634 06:55:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:21.634 06:55:43 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:21.634 06:55:43 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:21.634 06:55:43 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:21.634 06:55:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:21.634 06:55:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:21.634 06:55:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:21.634 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:21.634 06:55:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:21.634 06:55:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:21.634 06:55:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:21.634 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:21.634 06:55:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:21.634 06:55:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:21.634 06:55:43 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:21.634 06:55:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:21.634 06:55:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.634 06:55:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:21.634 06:55:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.634 06:55:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:21.634 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:21.634 06:55:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.634 06:55:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:21.634 06:55:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.634 06:55:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:21.634 06:55:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.634 06:55:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:21.634 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:21.634 06:55:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.634 06:55:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:21.634 06:55:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:21.635 06:55:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:21.635 06:55:43 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:21.635 06:55:43 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:15:21.635 06:55:43 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:21.635 06:55:43 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:21.635 06:55:43 -- nvmf/common.sh@57 -- # uname 00:15:21.635 06:55:43 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:21.635 06:55:43 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:21.635 06:55:43 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:21.635 06:55:43 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:21.635 06:55:43 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:21.635 06:55:43 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:21.635 06:55:43 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:21.635 06:55:43 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:21.635 06:55:43 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:21.635 06:55:43 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:21.635 06:55:43 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:21.635 06:55:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:21.635 06:55:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:21.635 06:55:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:21.635 06:55:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:21.894 06:55:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:21.894 06:55:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@104 -- # continue 2 00:15:21.894 06:55:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:21.894 06:55:43 -- nvmf/common.sh@104 -- # continue 2 00:15:21.894 06:55:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:21.894 06:55:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:21.894 06:55:43 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:21.894 06:55:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:21.894 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:21.894 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:21.894 altname enp217s0f0np0 00:15:21.894 altname ens818f0np0 00:15:21.894 inet 192.168.100.8/24 scope global mlx_0_0 00:15:21.894 valid_lft forever preferred_lft forever 00:15:21.894 06:55:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:21.894 06:55:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:21.894 06:55:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:21.894 06:55:43 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:21.894 06:55:43 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:21.894 06:55:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:21.894 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:21.894 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:21.894 altname enp217s0f1np1 00:15:21.894 altname ens818f1np1 00:15:21.894 inet 192.168.100.9/24 scope global mlx_0_1 00:15:21.894 valid_lft forever preferred_lft forever 00:15:21.894 06:55:43 -- nvmf/common.sh@410 -- # return 0 00:15:21.894 06:55:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:21.894 06:55:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:21.894 06:55:43 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:21.894 06:55:43 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:21.894 06:55:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:21.894 06:55:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:21.894 06:55:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:21.894 06:55:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:21.894 06:55:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:21.894 06:55:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@104 -- # continue 2 00:15:21.894 06:55:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:21.894 06:55:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:21.894 06:55:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:21.894 06:55:43 -- nvmf/common.sh@104 -- # continue 2 00:15:21.894 06:55:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:21.894 06:55:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:21.894 06:55:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:21.894 06:55:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:21.894 06:55:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:21.894 06:55:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:21.895 06:55:43 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:21.895 192.168.100.9' 00:15:21.895 06:55:43 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:21.895 192.168.100.9' 00:15:21.895 06:55:43 -- nvmf/common.sh@445 -- # head -n 1 00:15:21.895 06:55:43 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:21.895 06:55:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:21.895 192.168.100.9' 00:15:21.895 06:55:43 -- nvmf/common.sh@446 -- # head -n 1 00:15:21.895 06:55:43 -- nvmf/common.sh@446 -- # tail -n +2 00:15:21.895 06:55:43 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:21.895 06:55:43 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:21.895 06:55:43 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:21.895 06:55:43 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:21.895 06:55:43 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:21.895 06:55:43 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:21.895 06:55:43 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:21.895 06:55:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:21.895 06:55:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.895 06:55:43 -- common/autotest_common.sh@10 -- # set +x 00:15:21.895 06:55:43 -- nvmf/common.sh@469 -- # nvmfpid=1301495 00:15:21.895 06:55:43 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:21.895 06:55:43 -- nvmf/common.sh@470 -- # waitforlisten 1301495 00:15:21.895 06:55:43 -- common/autotest_common.sh@829 -- # '[' -z 1301495 ']' 00:15:21.895 06:55:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.895 06:55:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.895 06:55:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.895 06:55:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.895 06:55:43 -- common/autotest_common.sh@10 -- # set +x 00:15:21.895 [2024-12-15 06:55:43.492556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:21.895 [2024-12-15 06:55:43.492608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.895 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.154 [2024-12-15 06:55:43.564092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.154 [2024-12-15 06:55:43.602697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:22.154 [2024-12-15 06:55:43.602810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.154 [2024-12-15 06:55:43.602824] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.154 [2024-12-15 06:55:43.602832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
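The allocate_nic_ips / get_ip_address traces above reduce to one pipeline per interface; a minimal sketch of the helper as traced (nvmf/common.sh@111-112), assuming the mlx_0_0/mlx_0_1 interface names seen in this run:

# Roughly what get_ip_address does in the trace above: print the first
# IPv4 address on an interface with the /prefix length stripped.
get_ip_address() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run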
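With the target up, every negative test that follows drives the same pattern: call rpc.py with a deliberately bad argument, capture the JSON-RPC error text, and match the expected message. A minimal sketch of that shape (illustrative only, not the invalid.sh source; it assumes the rpc.py path used throughout this log and a running nvmf_tgt):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Creating a subsystem against a nonexistent target must fail with -32603.
out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30599 2>&1) || true
[[ $out == *"Unable to find target"* ]] || exit 1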
00:15:22.154 [2024-12-15 06:55:43.602928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.154 [2024-12-15 06:55:43.603054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.154 [2024-12-15 06:55:43.603028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.154 [2024-12-15 06:55:43.603057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.722 06:55:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.722 06:55:44 -- common/autotest_common.sh@862 -- # return 0 00:15:22.722 06:55:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:22.722 06:55:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.722 06:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:22.722 06:55:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.722 06:55:44 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:22.722 06:55:44 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30599 00:15:22.982 [2024-12-15 06:55:44.522971] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:22.982 06:55:44 -- target/invalid.sh@40 -- # out='request: 00:15:22.982 { 00:15:22.982 "nqn": "nqn.2016-06.io.spdk:cnode30599", 00:15:22.982 "tgt_name": "foobar", 00:15:22.982 "method": "nvmf_create_subsystem", 00:15:22.982 "req_id": 1 00:15:22.982 } 00:15:22.982 Got JSON-RPC error response 00:15:22.982 response: 00:15:22.982 { 00:15:22.982 "code": -32603, 00:15:22.982 "message": "Unable to find target foobar" 00:15:22.982 }' 00:15:22.982 06:55:44 -- target/invalid.sh@41 -- # [[ request: 00:15:22.982 { 00:15:22.982 "nqn": "nqn.2016-06.io.spdk:cnode30599", 00:15:22.982 "tgt_name": "foobar", 00:15:22.982 "method": "nvmf_create_subsystem", 00:15:22.982 "req_id": 1 00:15:22.982 } 00:15:22.982 Got JSON-RPC error response 00:15:22.982 response: 00:15:22.982 { 00:15:22.982 "code": -32603, 00:15:22.982 "message": "Unable to find target foobar" 00:15:22.982 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:22.982 06:55:44 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:22.982 06:55:44 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4267 00:15:23.241 [2024-12-15 06:55:44.727737] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4267: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:23.241 06:55:44 -- target/invalid.sh@45 -- # out='request: 00:15:23.241 { 00:15:23.241 "nqn": "nqn.2016-06.io.spdk:cnode4267", 00:15:23.241 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:23.241 "method": "nvmf_create_subsystem", 00:15:23.241 "req_id": 1 00:15:23.241 } 00:15:23.241 Got JSON-RPC error response 00:15:23.241 response: 00:15:23.241 { 00:15:23.241 "code": -32602, 00:15:23.241 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:23.241 }' 00:15:23.241 06:55:44 -- target/invalid.sh@46 -- # [[ request: 00:15:23.241 { 00:15:23.241 "nqn": "nqn.2016-06.io.spdk:cnode4267", 00:15:23.241 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:23.241 "method": "nvmf_create_subsystem", 00:15:23.241 "req_id": 1 00:15:23.241 } 00:15:23.241 Got JSON-RPC error response 00:15:23.241 response: 00:15:23.241 { 00:15:23.241 
"code": -32602, 00:15:23.241 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:23.241 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:23.241 06:55:44 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:23.241 06:55:44 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11261 00:15:23.501 [2024-12-15 06:55:44.928389] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11261: invalid model number 'SPDK_Controller' 00:15:23.501 06:55:44 -- target/invalid.sh@50 -- # out='request: 00:15:23.501 { 00:15:23.501 "nqn": "nqn.2016-06.io.spdk:cnode11261", 00:15:23.501 "model_number": "SPDK_Controller\u001f", 00:15:23.501 "method": "nvmf_create_subsystem", 00:15:23.501 "req_id": 1 00:15:23.501 } 00:15:23.501 Got JSON-RPC error response 00:15:23.501 response: 00:15:23.501 { 00:15:23.501 "code": -32602, 00:15:23.501 "message": "Invalid MN SPDK_Controller\u001f" 00:15:23.501 }' 00:15:23.501 06:55:44 -- target/invalid.sh@51 -- # [[ request: 00:15:23.501 { 00:15:23.501 "nqn": "nqn.2016-06.io.spdk:cnode11261", 00:15:23.501 "model_number": "SPDK_Controller\u001f", 00:15:23.501 "method": "nvmf_create_subsystem", 00:15:23.501 "req_id": 1 00:15:23.501 } 00:15:23.501 Got JSON-RPC error response 00:15:23.501 response: 00:15:23.501 { 00:15:23.501 "code": -32602, 00:15:23.501 "message": "Invalid MN SPDK_Controller\u001f" 00:15:23.501 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:23.501 06:55:44 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:23.501 06:55:44 -- target/invalid.sh@19 -- # local length=21 ll 00:15:23.501 06:55:44 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:23.501 06:55:44 -- target/invalid.sh@21 -- # local chars 00:15:23.501 06:55:44 -- target/invalid.sh@22 -- # local string 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # printf %x 126 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # string+='~' 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # printf %x 98 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # string+=b 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # printf %x 118 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # string+=v 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # printf %x 79 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # echo 
-e '\x4f' 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # string+=O 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # printf %x 97 00:15:23.501 06:55:44 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=a 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 58 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=: 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 117 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=u 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 102 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=f 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 71 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=G 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 63 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+='?' 
00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 90 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=Z 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 47 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=/ 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 67 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=C 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 39 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=\' 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 96 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+='`' 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 114 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=r 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 81 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=Q 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 33 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+='!' 
00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 74 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=J 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 87 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=W 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # printf %x 106 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:23.501 06:55:45 -- target/invalid.sh@25 -- # string+=j 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.501 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.502 06:55:45 -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:15:23.502 06:55:45 -- target/invalid.sh@31 -- # echo '~bvOa:ufG?Z/C'\''`rQ!JWj' 00:15:23.502 06:55:45 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '~bvOa:ufG?Z/C'\''`rQ!JWj' nqn.2016-06.io.spdk:cnode2336 00:15:23.761 [2024-12-15 06:55:45.285586] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2336: invalid serial number '~bvOa:ufG?Z/C'`rQ!JWj' 00:15:23.761 06:55:45 -- target/invalid.sh@54 -- # out='request: 00:15:23.761 { 00:15:23.761 "nqn": "nqn.2016-06.io.spdk:cnode2336", 00:15:23.761 "serial_number": "~bvOa:ufG?Z/C'\''`rQ!JWj", 00:15:23.761 "method": "nvmf_create_subsystem", 00:15:23.761 "req_id": 1 00:15:23.761 } 00:15:23.761 Got JSON-RPC error response 00:15:23.761 response: 00:15:23.761 { 00:15:23.761 "code": -32602, 00:15:23.761 "message": "Invalid SN ~bvOa:ufG?Z/C'\''`rQ!JWj" 00:15:23.761 }' 00:15:23.761 06:55:45 -- target/invalid.sh@55 -- # [[ request: 00:15:23.761 { 00:15:23.761 "nqn": "nqn.2016-06.io.spdk:cnode2336", 00:15:23.761 "serial_number": "~bvOa:ufG?Z/C'`rQ!JWj", 00:15:23.761 "method": "nvmf_create_subsystem", 00:15:23.761 "req_id": 1 00:15:23.761 } 00:15:23.761 Got JSON-RPC error response 00:15:23.761 response: 00:15:23.761 { 00:15:23.761 "code": -32602, 00:15:23.761 "message": "Invalid SN ~bvOa:ufG?Z/C'`rQ!JWj" 00:15:23.761 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:23.761 06:55:45 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:23.761 06:55:45 -- target/invalid.sh@19 -- # local length=41 ll 00:15:23.761 06:55:45 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:23.761 06:55:45 -- target/invalid.sh@21 -- # local chars 00:15:23.761 06:55:45 -- target/invalid.sh@22 -- # local string 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 
06:55:45 -- target/invalid.sh@25 -- # printf %x 67 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=C 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 99 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=c 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 56 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=8 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 91 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+='[' 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 74 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=J 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 46 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=. 
00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 77 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=M 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 97 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=a 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 122 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=z 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 116 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # string+=t 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:23.761 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # printf %x 99 00:15:23.761 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=c 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 104 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=h 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 72 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=H 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 112 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=p 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 64 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=@ 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 39 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=\' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 69 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=E 
00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 87 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=W 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 93 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=']' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 41 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=')' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 33 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+='!' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 105 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=i 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 115 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=s 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 37 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=% 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 35 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+='#' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 40 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+='(' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 78 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=N 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 74 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=J 
00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 109 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=m 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 84 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=T 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 59 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=';' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 104 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=h 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 125 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+='}' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 85 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=U 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 84 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=T 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 41 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=')' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 39 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=\' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 56 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=8 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 116 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=t 
00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 88 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+=X 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # printf %x 36 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:24.021 06:55:45 -- target/invalid.sh@25 -- # string+='$' 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.021 06:55:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.021 06:55:45 -- target/invalid.sh@28 -- # [[ C == \- ]] 00:15:24.021 06:55:45 -- target/invalid.sh@31 -- # echo 'Cc8[J.MaztchHp@'\''EW])!is%#(NJmT;h}UT)'\''8tX$' 00:15:24.021 06:55:45 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Cc8[J.MaztchHp@'\''EW])!is%#(NJmT;h}UT)'\''8tX$' nqn.2016-06.io.spdk:cnode16405 00:15:24.281 [2024-12-15 06:55:45.791295] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16405: invalid model number 'Cc8[J.MaztchHp@'EW])!is%#(NJmT;h}UT)'8tX$' 00:15:24.281 06:55:45 -- target/invalid.sh@58 -- # out='request: 00:15:24.281 { 00:15:24.281 "nqn": "nqn.2016-06.io.spdk:cnode16405", 00:15:24.281 "model_number": "Cc8[J.MaztchHp@'\''EW])!is%#(NJmT;h}UT)'\''8tX$", 00:15:24.281 "method": "nvmf_create_subsystem", 00:15:24.281 "req_id": 1 00:15:24.281 } 00:15:24.281 Got JSON-RPC error response 00:15:24.281 response: 00:15:24.281 { 00:15:24.281 "code": -32602, 00:15:24.281 "message": "Invalid MN Cc8[J.MaztchHp@'\''EW])!is%#(NJmT;h}UT)'\''8tX$" 00:15:24.281 }' 00:15:24.281 06:55:45 -- target/invalid.sh@59 -- # [[ request: 00:15:24.281 { 00:15:24.281 "nqn": "nqn.2016-06.io.spdk:cnode16405", 00:15:24.281 "model_number": "Cc8[J.MaztchHp@'EW])!is%#(NJmT;h}UT)'8tX$", 00:15:24.281 "method": "nvmf_create_subsystem", 00:15:24.281 "req_id": 1 00:15:24.281 } 00:15:24.281 Got JSON-RPC error response 00:15:24.281 response: 00:15:24.281 { 00:15:24.281 "code": -32602, 00:15:24.281 "message": "Invalid MN Cc8[J.MaztchHp@'EW])!is%#(NJmT;h}UT)'8tX$" 00:15:24.281 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:24.281 06:55:45 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:24.540 [2024-12-15 06:55:46.005599] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13059b0/0x1309ea0) succeed. 00:15:24.540 [2024-12-15 06:55:46.014737] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1306f50/0x134b540) succeed. 
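The long character-by-character traces above are gen_random_s assembling a 21-character serial number and a 41-character model number from the printable-ASCII table (decimal 32-127); because invalid.sh seeds RANDOM=0 (sh@16 earlier in this log), the strings are reproducible run to run. A compact sketch of the same idea (a hypothetical rewrite for readability, not the invalid.sh source; the real script also special-cases a leading '-' so the result stays safe as a CLI argument, per the [[ ~ == \- ]] check above):

gen_random_s() {
  # Append one random printable character (decimal 32-127) per iteration,
  # mirroring the chars=('32' ... '127') table in the trace.
  local length=$1 string= ll n c
  for ((ll = 0; ll < length; ll++)); do
    n=$((RANDOM % 96 + 32))
    printf -v c "\\$(printf '%03o' "$n")"   # decimal -> octal escape -> char
    string+=$c
  done
  printf '%s\n' "$string"
}
gen_random_s 21   # e.g. ~bvOa:ufG?Z/C'`rQ!JWj was produced in this run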
00:15:24.540 06:55:46 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:24.800 06:55:46 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:24.800 06:55:46 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:24.800 192.168.100.9' 00:15:24.800 06:55:46 -- target/invalid.sh@67 -- # head -n 1 00:15:24.800 06:55:46 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:24.800 06:55:46 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:25.059 [2024-12-15 06:55:46.507547] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:25.059 06:55:46 -- target/invalid.sh@69 -- # out='request: 00:15:25.059 { 00:15:25.059 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:25.059 "listen_address": { 00:15:25.059 "trtype": "rdma", 00:15:25.059 "traddr": "192.168.100.8", 00:15:25.059 "trsvcid": "4421" 00:15:25.059 }, 00:15:25.059 "method": "nvmf_subsystem_remove_listener", 00:15:25.059 "req_id": 1 00:15:25.059 } 00:15:25.059 Got JSON-RPC error response 00:15:25.059 response: 00:15:25.059 { 00:15:25.059 "code": -32602, 00:15:25.059 "message": "Invalid parameters" 00:15:25.059 }' 00:15:25.059 06:55:46 -- target/invalid.sh@70 -- # [[ request: 00:15:25.059 { 00:15:25.059 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:25.059 "listen_address": { 00:15:25.059 "trtype": "rdma", 00:15:25.059 "traddr": "192.168.100.8", 00:15:25.059 "trsvcid": "4421" 00:15:25.059 }, 00:15:25.059 "method": "nvmf_subsystem_remove_listener", 00:15:25.059 "req_id": 1 00:15:25.059 } 00:15:25.059 Got JSON-RPC error response 00:15:25.059 response: 00:15:25.059 { 00:15:25.059 "code": -32602, 00:15:25.059 "message": "Invalid parameters" 00:15:25.059 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:25.059 06:55:46 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14379 -i 0 00:15:25.059 [2024-12-15 06:55:46.696168] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14379: invalid cntlid range [0-65519] 00:15:25.318 06:55:46 -- target/invalid.sh@73 -- # out='request: 00:15:25.318 { 00:15:25.318 "nqn": "nqn.2016-06.io.spdk:cnode14379", 00:15:25.318 "min_cntlid": 0, 00:15:25.318 "method": "nvmf_create_subsystem", 00:15:25.318 "req_id": 1 00:15:25.318 } 00:15:25.318 Got JSON-RPC error response 00:15:25.318 response: 00:15:25.318 { 00:15:25.318 "code": -32602, 00:15:25.318 "message": "Invalid cntlid range [0-65519]" 00:15:25.318 }' 00:15:25.318 06:55:46 -- target/invalid.sh@74 -- # [[ request: 00:15:25.318 { 00:15:25.318 "nqn": "nqn.2016-06.io.spdk:cnode14379", 00:15:25.318 "min_cntlid": 0, 00:15:25.318 "method": "nvmf_create_subsystem", 00:15:25.318 "req_id": 1 00:15:25.318 } 00:15:25.318 Got JSON-RPC error response 00:15:25.318 response: 00:15:25.318 { 00:15:25.318 "code": -32602, 00:15:25.318 "message": "Invalid cntlid range [0-65519]" 00:15:25.318 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.318 06:55:46 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22585 -i 65520 00:15:25.318 [2024-12-15 06:55:46.880812] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22585: invalid cntlid range [65520-65519] 00:15:25.318 
06:55:46 -- target/invalid.sh@75 -- # out='request: 00:15:25.318 { 00:15:25.318 "nqn": "nqn.2016-06.io.spdk:cnode22585", 00:15:25.318 "min_cntlid": 65520, 00:15:25.318 "method": "nvmf_create_subsystem", 00:15:25.318 "req_id": 1 00:15:25.318 } 00:15:25.318 Got JSON-RPC error response 00:15:25.318 response: 00:15:25.318 { 00:15:25.318 "code": -32602, 00:15:25.318 "message": "Invalid cntlid range [65520-65519]" 00:15:25.318 }' 00:15:25.318 06:55:46 -- target/invalid.sh@76 -- # [[ request: 00:15:25.318 { 00:15:25.318 "nqn": "nqn.2016-06.io.spdk:cnode22585", 00:15:25.318 "min_cntlid": 65520, 00:15:25.318 "method": "nvmf_create_subsystem", 00:15:25.318 "req_id": 1 00:15:25.318 } 00:15:25.318 Got JSON-RPC error response 00:15:25.318 response: 00:15:25.318 { 00:15:25.318 "code": -32602, 00:15:25.318 "message": "Invalid cntlid range [65520-65519]" 00:15:25.318 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.318 06:55:46 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25953 -I 0 00:15:25.577 [2024-12-15 06:55:47.069486] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25953: invalid cntlid range [1-0] 00:15:25.577 06:55:47 -- target/invalid.sh@77 -- # out='request: 00:15:25.577 { 00:15:25.577 "nqn": "nqn.2016-06.io.spdk:cnode25953", 00:15:25.577 "max_cntlid": 0, 00:15:25.577 "method": "nvmf_create_subsystem", 00:15:25.577 "req_id": 1 00:15:25.577 } 00:15:25.577 Got JSON-RPC error response 00:15:25.577 response: 00:15:25.577 { 00:15:25.577 "code": -32602, 00:15:25.577 "message": "Invalid cntlid range [1-0]" 00:15:25.577 }' 00:15:25.577 06:55:47 -- target/invalid.sh@78 -- # [[ request: 00:15:25.577 { 00:15:25.577 "nqn": "nqn.2016-06.io.spdk:cnode25953", 00:15:25.577 "max_cntlid": 0, 00:15:25.577 "method": "nvmf_create_subsystem", 00:15:25.577 "req_id": 1 00:15:25.577 } 00:15:25.577 Got JSON-RPC error response 00:15:25.577 response: 00:15:25.577 { 00:15:25.577 "code": -32602, 00:15:25.577 "message": "Invalid cntlid range [1-0]" 00:15:25.577 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.577 06:55:47 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29461 -I 65520 00:15:25.837 [2024-12-15 06:55:47.258187] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29461: invalid cntlid range [1-65520] 00:15:25.837 06:55:47 -- target/invalid.sh@79 -- # out='request: 00:15:25.837 { 00:15:25.837 "nqn": "nqn.2016-06.io.spdk:cnode29461", 00:15:25.837 "max_cntlid": 65520, 00:15:25.837 "method": "nvmf_create_subsystem", 00:15:25.837 "req_id": 1 00:15:25.837 } 00:15:25.837 Got JSON-RPC error response 00:15:25.837 response: 00:15:25.837 { 00:15:25.837 "code": -32602, 00:15:25.837 "message": "Invalid cntlid range [1-65520]" 00:15:25.837 }' 00:15:25.837 06:55:47 -- target/invalid.sh@80 -- # [[ request: 00:15:25.837 { 00:15:25.837 "nqn": "nqn.2016-06.io.spdk:cnode29461", 00:15:25.837 "max_cntlid": 65520, 00:15:25.837 "method": "nvmf_create_subsystem", 00:15:25.837 "req_id": 1 00:15:25.837 } 00:15:25.837 Got JSON-RPC error response 00:15:25.837 response: 00:15:25.837 { 00:15:25.837 "code": -32602, 00:15:25.837 "message": "Invalid cntlid range [1-65520]" 00:15:25.837 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.837 06:55:47 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode22497 -i 6 -I 5 00:15:26.096 [2024-12-15 06:55:47.499042] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22497: invalid cntlid range [6-5] 00:15:26.096 06:55:47 -- target/invalid.sh@83 -- # out='request: 00:15:26.096 { 00:15:26.096 "nqn": "nqn.2016-06.io.spdk:cnode22497", 00:15:26.096 "min_cntlid": 6, 00:15:26.096 "max_cntlid": 5, 00:15:26.096 "method": "nvmf_create_subsystem", 00:15:26.096 "req_id": 1 00:15:26.096 } 00:15:26.096 Got JSON-RPC error response 00:15:26.096 response: 00:15:26.096 { 00:15:26.096 "code": -32602, 00:15:26.096 "message": "Invalid cntlid range [6-5]" 00:15:26.096 }' 00:15:26.096 06:55:47 -- target/invalid.sh@84 -- # [[ request: 00:15:26.096 { 00:15:26.096 "nqn": "nqn.2016-06.io.spdk:cnode22497", 00:15:26.096 "min_cntlid": 6, 00:15:26.096 "max_cntlid": 5, 00:15:26.096 "method": "nvmf_create_subsystem", 00:15:26.096 "req_id": 1 00:15:26.096 } 00:15:26.096 Got JSON-RPC error response 00:15:26.096 response: 00:15:26.096 { 00:15:26.096 "code": -32602, 00:15:26.096 "message": "Invalid cntlid range [6-5]" 00:15:26.096 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:26.096 06:55:47 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:26.096 06:55:47 -- target/invalid.sh@87 -- # out='request: 00:15:26.096 { 00:15:26.096 "name": "foobar", 00:15:26.096 "method": "nvmf_delete_target", 00:15:26.096 "req_id": 1 00:15:26.096 } 00:15:26.096 Got JSON-RPC error response 00:15:26.096 response: 00:15:26.096 { 00:15:26.096 "code": -32602, 00:15:26.096 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:26.096 }' 00:15:26.096 06:55:47 -- target/invalid.sh@88 -- # [[ request: 00:15:26.096 { 00:15:26.096 "name": "foobar", 00:15:26.096 "method": "nvmf_delete_target", 00:15:26.096 "req_id": 1 00:15:26.096 } 00:15:26.096 Got JSON-RPC error response 00:15:26.096 response: 00:15:26.096 { 00:15:26.096 "code": -32602, 00:15:26.096 "message": "The specified target doesn't exist, cannot delete it." 
00:15:26.096 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:26.096 06:55:47 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:26.096 06:55:47 -- target/invalid.sh@91 -- # nvmftestfini 00:15:26.096 06:55:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:26.096 06:55:47 -- nvmf/common.sh@116 -- # sync 00:15:26.096 06:55:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:26.096 06:55:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:26.096 06:55:47 -- nvmf/common.sh@119 -- # set +e 00:15:26.096 06:55:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:26.096 06:55:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:26.096 rmmod nvme_rdma 00:15:26.096 rmmod nvme_fabrics 00:15:26.096 06:55:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:26.096 06:55:47 -- nvmf/common.sh@123 -- # set -e 00:15:26.096 06:55:47 -- nvmf/common.sh@124 -- # return 0 00:15:26.096 06:55:47 -- nvmf/common.sh@477 -- # '[' -n 1301495 ']' 00:15:26.096 06:55:47 -- nvmf/common.sh@478 -- # killprocess 1301495 00:15:26.096 06:55:47 -- common/autotest_common.sh@936 -- # '[' -z 1301495 ']' 00:15:26.096 06:55:47 -- common/autotest_common.sh@940 -- # kill -0 1301495 00:15:26.096 06:55:47 -- common/autotest_common.sh@941 -- # uname 00:15:26.096 06:55:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.096 06:55:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1301495 00:15:26.356 06:55:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.356 06:55:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.356 06:55:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1301495' 00:15:26.356 killing process with pid 1301495 00:15:26.356 06:55:47 -- common/autotest_common.sh@955 -- # kill 1301495 00:15:26.356 06:55:47 -- common/autotest_common.sh@960 -- # wait 1301495 00:15:26.615 06:55:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:26.615 06:55:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:26.615 00:15:26.615 real 0m11.586s 00:15:26.615 user 0m21.767s 00:15:26.615 sys 0m6.404s 00:15:26.616 06:55:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:26.616 06:55:47 -- common/autotest_common.sh@10 -- # set +x 00:15:26.616 ************************************ 00:15:26.616 END TEST nvmf_invalid 00:15:26.616 ************************************ 00:15:26.616 06:55:48 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:26.616 06:55:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:26.616 06:55:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.616 06:55:48 -- common/autotest_common.sh@10 -- # set +x 00:15:26.616 ************************************ 00:15:26.616 START TEST nvmf_abort 00:15:26.616 ************************************ 00:15:26.616 06:55:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:26.616 * Looking for test storage... 
00:15:26.616 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:26.616 06:55:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:26.616 06:55:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:26.616 06:55:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:26.616 06:55:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:26.616 06:55:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:26.616 06:55:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:26.616 06:55:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:26.616 06:55:48 -- scripts/common.sh@335 -- # IFS=.-: 00:15:26.616 06:55:48 -- scripts/common.sh@335 -- # read -ra ver1 00:15:26.616 06:55:48 -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.616 06:55:48 -- scripts/common.sh@336 -- # read -ra ver2 00:15:26.616 06:55:48 -- scripts/common.sh@337 -- # local 'op=<' 00:15:26.616 06:55:48 -- scripts/common.sh@339 -- # ver1_l=2 00:15:26.616 06:55:48 -- scripts/common.sh@340 -- # ver2_l=1 00:15:26.616 06:55:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:26.616 06:55:48 -- scripts/common.sh@343 -- # case "$op" in 00:15:26.616 06:55:48 -- scripts/common.sh@344 -- # : 1 00:15:26.616 06:55:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:26.616 06:55:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.616 06:55:48 -- scripts/common.sh@364 -- # decimal 1 00:15:26.616 06:55:48 -- scripts/common.sh@352 -- # local d=1 00:15:26.616 06:55:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.616 06:55:48 -- scripts/common.sh@354 -- # echo 1 00:15:26.616 06:55:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:26.616 06:55:48 -- scripts/common.sh@365 -- # decimal 2 00:15:26.616 06:55:48 -- scripts/common.sh@352 -- # local d=2 00:15:26.616 06:55:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.616 06:55:48 -- scripts/common.sh@354 -- # echo 2 00:15:26.616 06:55:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:26.616 06:55:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:26.616 06:55:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:26.616 06:55:48 -- scripts/common.sh@367 -- # return 0 00:15:26.616 06:55:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.616 06:55:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:26.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.616 --rc genhtml_branch_coverage=1 00:15:26.616 --rc genhtml_function_coverage=1 00:15:26.616 --rc genhtml_legend=1 00:15:26.616 --rc geninfo_all_blocks=1 00:15:26.616 --rc geninfo_unexecuted_blocks=1 00:15:26.616 00:15:26.616 ' 00:15:26.616 06:55:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:26.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.616 --rc genhtml_branch_coverage=1 00:15:26.616 --rc genhtml_function_coverage=1 00:15:26.616 --rc genhtml_legend=1 00:15:26.616 --rc geninfo_all_blocks=1 00:15:26.616 --rc geninfo_unexecuted_blocks=1 00:15:26.616 00:15:26.616 ' 00:15:26.616 06:55:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:26.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.616 --rc genhtml_branch_coverage=1 00:15:26.616 --rc genhtml_function_coverage=1 00:15:26.616 --rc genhtml_legend=1 00:15:26.616 --rc geninfo_all_blocks=1 00:15:26.616 --rc geninfo_unexecuted_blocks=1 00:15:26.616 00:15:26.616 ' 
00:15:26.616 06:55:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:26.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.616 --rc genhtml_branch_coverage=1 00:15:26.616 --rc genhtml_function_coverage=1 00:15:26.616 --rc genhtml_legend=1 00:15:26.616 --rc geninfo_all_blocks=1 00:15:26.616 --rc geninfo_unexecuted_blocks=1 00:15:26.616 00:15:26.616 ' 00:15:26.616 06:55:48 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.616 06:55:48 -- nvmf/common.sh@7 -- # uname -s 00:15:26.616 06:55:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.616 06:55:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.616 06:55:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.616 06:55:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.616 06:55:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.616 06:55:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.616 06:55:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.616 06:55:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.616 06:55:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.616 06:55:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.616 06:55:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:26.616 06:55:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:26.616 06:55:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.616 06:55:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.616 06:55:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.616 06:55:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:26.616 06:55:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.616 06:55:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.616 06:55:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.616 06:55:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.616 06:55:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.616 06:55:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.616 06:55:48 -- paths/export.sh@5 -- # export PATH 00:15:26.616 06:55:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.616 06:55:48 -- nvmf/common.sh@46 -- # : 0 00:15:26.616 06:55:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:26.616 06:55:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:26.616 06:55:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:26.616 06:55:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.616 06:55:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.616 06:55:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:26.616 06:55:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:26.616 06:55:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:26.876 06:55:48 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.876 06:55:48 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:26.876 06:55:48 -- target/abort.sh@14 -- # nvmftestinit 00:15:26.876 06:55:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:26.876 06:55:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.876 06:55:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:26.876 06:55:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:26.876 06:55:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:26.876 06:55:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.876 06:55:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.876 06:55:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.876 06:55:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:26.876 06:55:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:26.876 06:55:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:26.876 06:55:48 -- common/autotest_common.sh@10 -- # set +x 00:15:33.444 06:55:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:33.444 06:55:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:33.444 06:55:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:33.444 06:55:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:33.444 06:55:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:33.444 06:55:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:33.444 06:55:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:33.444 06:55:54 -- nvmf/common.sh@294 -- # net_devs=() 00:15:33.444 06:55:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:33.444 06:55:54 -- nvmf/common.sh@295 -- 
# e810=() 00:15:33.444 06:55:54 -- nvmf/common.sh@295 -- # local -ga e810 00:15:33.444 06:55:54 -- nvmf/common.sh@296 -- # x722=() 00:15:33.444 06:55:54 -- nvmf/common.sh@296 -- # local -ga x722 00:15:33.444 06:55:54 -- nvmf/common.sh@297 -- # mlx=() 00:15:33.444 06:55:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:33.444 06:55:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.444 06:55:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.444 06:55:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.444 06:55:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.444 06:55:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.444 06:55:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.444 06:55:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.444 06:55:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.445 06:55:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.445 06:55:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.445 06:55:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.445 06:55:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:33.445 06:55:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:33.445 06:55:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:33.445 06:55:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:33.445 06:55:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:33.445 06:55:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:33.445 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:33.445 06:55:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:33.445 06:55:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:33.445 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:33.445 06:55:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:33.445 06:55:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:33.445 06:55:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.445 06:55:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:15:33.445 06:55:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.445 06:55:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:33.445 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.445 06:55:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.445 06:55:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:33.445 06:55:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.445 06:55:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:33.445 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.445 06:55:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:33.445 06:55:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:33.445 06:55:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:33.445 06:55:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:33.445 06:55:54 -- nvmf/common.sh@57 -- # uname 00:15:33.445 06:55:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:33.445 06:55:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:33.445 06:55:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:33.445 06:55:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:33.445 06:55:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:33.445 06:55:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:33.445 06:55:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:33.445 06:55:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:33.445 06:55:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:33.445 06:55:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:33.445 06:55:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:33.445 06:55:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:33.445 06:55:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:33.445 06:55:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:33.445 06:55:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:33.445 06:55:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:33.445 06:55:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@104 -- # continue 2 00:15:33.445 06:55:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@104 -- # continue 2 00:15:33.445 06:55:54 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:33.445 06:55:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:33.445 06:55:54 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:33.445 06:55:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:33.445 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:33.445 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:33.445 altname enp217s0f0np0 00:15:33.445 altname ens818f0np0 00:15:33.445 inet 192.168.100.8/24 scope global mlx_0_0 00:15:33.445 valid_lft forever preferred_lft forever 00:15:33.445 06:55:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:33.445 06:55:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:33.445 06:55:54 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:33.445 06:55:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:33.445 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:33.445 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:33.445 altname enp217s0f1np1 00:15:33.445 altname ens818f1np1 00:15:33.445 inet 192.168.100.9/24 scope global mlx_0_1 00:15:33.445 valid_lft forever preferred_lft forever 00:15:33.445 06:55:54 -- nvmf/common.sh@410 -- # return 0 00:15:33.445 06:55:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:33.445 06:55:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:33.445 06:55:54 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:33.445 06:55:54 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:33.445 06:55:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:33.445 06:55:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:33.445 06:55:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:33.445 06:55:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:33.445 06:55:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:33.445 06:55:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@104 -- # continue 2 00:15:33.445 06:55:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.445 06:55:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:33.445 06:55:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:33.445 06:55:54 -- 
nvmf/common.sh@104 -- # continue 2 00:15:33.445 06:55:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:33.445 06:55:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:33.445 06:55:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:33.445 06:55:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:33.445 06:55:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:33.445 06:55:54 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:33.445 192.168.100.9' 00:15:33.445 06:55:54 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:33.445 192.168.100.9' 00:15:33.445 06:55:54 -- nvmf/common.sh@445 -- # head -n 1 00:15:33.445 06:55:55 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:33.445 06:55:55 -- nvmf/common.sh@446 -- # tail -n +2 00:15:33.445 06:55:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:33.445 192.168.100.9' 00:15:33.445 06:55:55 -- nvmf/common.sh@446 -- # head -n 1 00:15:33.445 06:55:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:33.445 06:55:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:33.445 06:55:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:33.445 06:55:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:33.445 06:55:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:33.445 06:55:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:33.445 06:55:55 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:33.445 06:55:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:33.446 06:55:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.446 06:55:55 -- common/autotest_common.sh@10 -- # set +x 00:15:33.446 06:55:55 -- nvmf/common.sh@469 -- # nvmfpid=1305666 00:15:33.446 06:55:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:33.446 06:55:55 -- nvmf/common.sh@470 -- # waitforlisten 1305666 00:15:33.446 06:55:55 -- common/autotest_common.sh@829 -- # '[' -z 1305666 ']' 00:15:33.446 06:55:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.446 06:55:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.446 06:55:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.446 06:55:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.446 06:55:55 -- common/autotest_common.sh@10 -- # set +x 00:15:33.706 [2024-12-15 06:55:55.092169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:33.706 [2024-12-15 06:55:55.092218] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.706 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.706 [2024-12-15 06:55:55.162413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.706 [2024-12-15 06:55:55.199187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:33.706 [2024-12-15 06:55:55.199298] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.706 [2024-12-15 06:55:55.199309] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.706 [2024-12-15 06:55:55.199318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.706 [2024-12-15 06:55:55.199416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.706 [2024-12-15 06:55:55.199436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.706 [2024-12-15 06:55:55.199438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.641 06:55:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.641 06:55:55 -- common/autotest_common.sh@862 -- # return 0 00:15:34.641 06:55:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:34.641 06:55:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.641 06:55:55 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 06:55:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.641 06:55:55 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:34.641 06:55:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 06:55:55 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 [2024-12-15 06:55:55.987346] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x51a920/0x51edd0) succeed. 00:15:34.641 [2024-12-15 06:55:55.996391] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x51be20/0x560470) succeed. 
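At this point target/abort.sh has started its target (nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 1305666 above) and created the RDMA transport. A minimal sketch of reproducing that setup by hand, using only commands that appear verbatim in this log (the sleep is a crude stand-in for the harness's waitforlisten polling of the RPC socket):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Start the target: shm id 0, all tracepoint groups enabled, core mask 0xE (cores 1-3)
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    sleep 2  # the harness instead polls /var/tmp/spdk.sock via waitforlisten
    # Same transport options as target/abort.sh@17
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 -a 256

The two create_ib_device NOTICE lines directly above are the target confirming that both mlx5 ports were picked up while the transport was being created.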
00:15:34.641 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 06:55:56 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:34.641 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 Malloc0 00:15:34.641 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 06:55:56 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:34.641 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 Delay0 00:15:34.641 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 06:55:56 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:34.641 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 06:55:56 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:34.641 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 06:55:56 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:34.641 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 [2024-12-15 06:55:56.149058] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:34.641 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 06:55:56 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:34.641 06:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.641 06:55:56 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 06:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.641 06:55:56 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:34.641 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.641 [2024-12-15 06:55:56.242236] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:37.272 Initializing NVMe Controllers 00:15:37.272 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:37.272 controller IO queue size 128 less than required 00:15:37.272 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:37.272 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:37.272 Initialization complete. Launching workers. 
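The workload just launched is SPDK's abort example, pointed at the listener added a few lines up. Restated from the log, with a best-effort gloss of each flag (the glosses are my reading of the example's options, not taken from this log, so treat them as assumptions):

    # -r: target transport ID   -c: core mask (one core)
    # -t: run time in seconds   -l: log level   -q: queue depth
    "$SPDK/build/examples/abort" \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

Because the controller advertises an I/O queue smaller than the requested -q 128 (the "controller IO queue size 128 less than required" warning above), requests back up in the NVMe driver, which is exactly the queued-I/O condition the abort path is exercised against; the NS/CTRLR counters that follow summarize I/O completed versus aborts submitted.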
00:15:37.272 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51251 00:15:37.272 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51312, failed to submit 62 00:15:37.272 success 51251, unsuccess 61, failed 0 00:15:37.272 06:55:58 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:37.272 06:55:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.272 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:15:37.272 06:55:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.272 06:55:58 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:37.272 06:55:58 -- target/abort.sh@38 -- # nvmftestfini 00:15:37.272 06:55:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:37.272 06:55:58 -- nvmf/common.sh@116 -- # sync 00:15:37.272 06:55:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:37.272 06:55:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:37.272 06:55:58 -- nvmf/common.sh@119 -- # set +e 00:15:37.272 06:55:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:37.272 06:55:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:37.272 rmmod nvme_rdma 00:15:37.272 rmmod nvme_fabrics 00:15:37.272 06:55:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:37.272 06:55:58 -- nvmf/common.sh@123 -- # set -e 00:15:37.272 06:55:58 -- nvmf/common.sh@124 -- # return 0 00:15:37.272 06:55:58 -- nvmf/common.sh@477 -- # '[' -n 1305666 ']' 00:15:37.272 06:55:58 -- nvmf/common.sh@478 -- # killprocess 1305666 00:15:37.272 06:55:58 -- common/autotest_common.sh@936 -- # '[' -z 1305666 ']' 00:15:37.272 06:55:58 -- common/autotest_common.sh@940 -- # kill -0 1305666 00:15:37.272 06:55:58 -- common/autotest_common.sh@941 -- # uname 00:15:37.272 06:55:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.272 06:55:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1305666 00:15:37.272 06:55:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:37.272 06:55:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:37.272 06:55:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1305666' 00:15:37.272 killing process with pid 1305666 00:15:37.272 06:55:58 -- common/autotest_common.sh@955 -- # kill 1305666 00:15:37.272 06:55:58 -- common/autotest_common.sh@960 -- # wait 1305666 00:15:37.272 06:55:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:37.272 06:55:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:37.272 00:15:37.272 real 0m10.689s 00:15:37.272 user 0m14.672s 00:15:37.272 sys 0m5.695s 00:15:37.272 06:55:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:37.272 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:15:37.272 ************************************ 00:15:37.272 END TEST nvmf_abort 00:15:37.272 ************************************ 00:15:37.272 06:55:58 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:37.273 06:55:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:37.273 06:55:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.273 06:55:58 -- common/autotest_common.sh@10 -- # set +x 00:15:37.273 ************************************ 00:15:37.273 START TEST nvmf_ns_hotplug_stress 00:15:37.273 ************************************ 00:15:37.273 06:55:58 -- common/autotest_common.sh@1114 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:37.273 * Looking for test storage... 00:15:37.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:37.273 06:55:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:37.273 06:55:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:37.273 06:55:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:37.556 06:55:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:37.556 06:55:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:37.556 06:55:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:37.556 06:55:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:37.556 06:55:58 -- scripts/common.sh@335 -- # IFS=.-: 00:15:37.556 06:55:58 -- scripts/common.sh@335 -- # read -ra ver1 00:15:37.556 06:55:58 -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.556 06:55:58 -- scripts/common.sh@336 -- # read -ra ver2 00:15:37.556 06:55:58 -- scripts/common.sh@337 -- # local 'op=<' 00:15:37.556 06:55:58 -- scripts/common.sh@339 -- # ver1_l=2 00:15:37.556 06:55:58 -- scripts/common.sh@340 -- # ver2_l=1 00:15:37.556 06:55:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:37.556 06:55:58 -- scripts/common.sh@343 -- # case "$op" in 00:15:37.556 06:55:58 -- scripts/common.sh@344 -- # : 1 00:15:37.557 06:55:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:37.557 06:55:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.557 06:55:58 -- scripts/common.sh@364 -- # decimal 1 00:15:37.557 06:55:58 -- scripts/common.sh@352 -- # local d=1 00:15:37.557 06:55:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.557 06:55:58 -- scripts/common.sh@354 -- # echo 1 00:15:37.557 06:55:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:37.557 06:55:58 -- scripts/common.sh@365 -- # decimal 2 00:15:37.557 06:55:58 -- scripts/common.sh@352 -- # local d=2 00:15:37.557 06:55:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.557 06:55:58 -- scripts/common.sh@354 -- # echo 2 00:15:37.557 06:55:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:37.557 06:55:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:37.557 06:55:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:37.557 06:55:58 -- scripts/common.sh@367 -- # return 0 00:15:37.557 06:55:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.557 06:55:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:37.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.557 --rc genhtml_branch_coverage=1 00:15:37.557 --rc genhtml_function_coverage=1 00:15:37.557 --rc genhtml_legend=1 00:15:37.557 --rc geninfo_all_blocks=1 00:15:37.557 --rc geninfo_unexecuted_blocks=1 00:15:37.557 00:15:37.557 ' 00:15:37.557 06:55:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:37.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.557 --rc genhtml_branch_coverage=1 00:15:37.557 --rc genhtml_function_coverage=1 00:15:37.557 --rc genhtml_legend=1 00:15:37.557 --rc geninfo_all_blocks=1 00:15:37.557 --rc geninfo_unexecuted_blocks=1 00:15:37.557 00:15:37.557 ' 00:15:37.557 06:55:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:37.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.557 --rc genhtml_branch_coverage=1 00:15:37.557 --rc genhtml_function_coverage=1 
00:15:37.557 --rc genhtml_legend=1 00:15:37.557 --rc geninfo_all_blocks=1 00:15:37.557 --rc geninfo_unexecuted_blocks=1 00:15:37.557 00:15:37.557 ' 00:15:37.557 06:55:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:37.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.557 --rc genhtml_branch_coverage=1 00:15:37.557 --rc genhtml_function_coverage=1 00:15:37.557 --rc genhtml_legend=1 00:15:37.557 --rc geninfo_all_blocks=1 00:15:37.557 --rc geninfo_unexecuted_blocks=1 00:15:37.557 00:15:37.557 ' 00:15:37.557 06:55:58 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.557 06:55:58 -- nvmf/common.sh@7 -- # uname -s 00:15:37.557 06:55:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.557 06:55:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.557 06:55:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.557 06:55:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.557 06:55:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.557 06:55:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.557 06:55:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.557 06:55:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.557 06:55:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.557 06:55:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.557 06:55:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:37.557 06:55:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:37.557 06:55:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.557 06:55:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.557 06:55:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.557 06:55:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:37.557 06:55:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.557 06:55:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.557 06:55:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.557 06:55:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.557 06:55:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.557 06:55:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.557 06:55:58 -- paths/export.sh@5 -- # export PATH 00:15:37.557 06:55:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.557 06:55:58 -- nvmf/common.sh@46 -- # : 0 00:15:37.557 06:55:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.557 06:55:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.557 06:55:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.557 06:55:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.557 06:55:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.557 06:55:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:37.557 06:55:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.557 06:55:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.557 06:55:59 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:37.557 06:55:59 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:37.557 06:55:59 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:37.557 06:55:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.557 06:55:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.557 06:55:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.557 06:55:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.557 06:55:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.557 06:55:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.557 06:55:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.557 06:55:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:37.557 06:55:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:37.557 06:55:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:37.557 06:55:59 -- common/autotest_common.sh@10 -- # set +x 00:15:44.122 06:56:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:44.122 06:56:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:44.122 06:56:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:44.122 06:56:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:44.122 06:56:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:44.122 06:56:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:44.122 06:56:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:44.122 06:56:05 -- nvmf/common.sh@294 -- # net_devs=() 00:15:44.122 06:56:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:44.122 06:56:05 -- nvmf/common.sh@295 -- 
# e810=() 00:15:44.122 06:56:05 -- nvmf/common.sh@295 -- # local -ga e810 00:15:44.122 06:56:05 -- nvmf/common.sh@296 -- # x722=() 00:15:44.122 06:56:05 -- nvmf/common.sh@296 -- # local -ga x722 00:15:44.122 06:56:05 -- nvmf/common.sh@297 -- # mlx=() 00:15:44.122 06:56:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:44.122 06:56:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.122 06:56:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.122 06:56:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.122 06:56:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.122 06:56:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.123 06:56:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.123 06:56:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.123 06:56:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.123 06:56:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.123 06:56:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.123 06:56:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.123 06:56:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:44.123 06:56:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:44.123 06:56:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:44.123 06:56:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:44.123 06:56:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:44.123 06:56:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:44.123 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:44.123 06:56:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:44.123 06:56:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:44.123 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:44.123 06:56:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:44.123 06:56:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:44.123 06:56:05 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.123 06:56:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:15:44.123 06:56:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.123 06:56:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:44.123 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.123 06:56:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.123 06:56:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:44.123 06:56:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.123 06:56:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:44.123 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.123 06:56:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:44.123 06:56:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:44.123 06:56:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:44.123 06:56:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:44.123 06:56:05 -- nvmf/common.sh@57 -- # uname 00:15:44.123 06:56:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:44.123 06:56:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:44.123 06:56:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:44.123 06:56:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:44.123 06:56:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:44.123 06:56:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:44.123 06:56:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:44.123 06:56:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:44.123 06:56:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:44.123 06:56:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:44.123 06:56:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:44.123 06:56:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:44.123 06:56:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:44.123 06:56:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:44.123 06:56:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:44.123 06:56:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:44.123 06:56:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@104 -- # continue 2 00:15:44.123 06:56:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@104 -- # continue 2 00:15:44.123 06:56:05 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:44.123 06:56:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.123 06:56:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:44.123 06:56:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:44.123 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:44.123 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:44.123 altname enp217s0f0np0 00:15:44.123 altname ens818f0np0 00:15:44.123 inet 192.168.100.8/24 scope global mlx_0_0 00:15:44.123 valid_lft forever preferred_lft forever 00:15:44.123 06:56:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:44.123 06:56:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.123 06:56:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:44.123 06:56:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:44.123 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:44.123 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:44.123 altname enp217s0f1np1 00:15:44.123 altname ens818f1np1 00:15:44.123 inet 192.168.100.9/24 scope global mlx_0_1 00:15:44.123 valid_lft forever preferred_lft forever 00:15:44.123 06:56:05 -- nvmf/common.sh@410 -- # return 0 00:15:44.123 06:56:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:44.123 06:56:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:44.123 06:56:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:44.123 06:56:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:44.123 06:56:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:44.123 06:56:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:44.123 06:56:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:44.123 06:56:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:44.123 06:56:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:44.123 06:56:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@104 -- # continue 2 00:15:44.123 06:56:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.123 06:56:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:44.123 06:56:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:44.123 06:56:05 -- 
nvmf/common.sh@104 -- # continue 2 00:15:44.123 06:56:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:44.123 06:56:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.123 06:56:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:44.123 06:56:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.123 06:56:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.123 06:56:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:44.123 192.168.100.9' 00:15:44.123 06:56:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:44.123 192.168.100.9' 00:15:44.123 06:56:05 -- nvmf/common.sh@445 -- # head -n 1 00:15:44.123 06:56:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:44.123 06:56:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:44.123 192.168.100.9' 00:15:44.123 06:56:05 -- nvmf/common.sh@446 -- # tail -n +2 00:15:44.123 06:56:05 -- nvmf/common.sh@446 -- # head -n 1 00:15:44.123 06:56:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:44.123 06:56:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:44.124 06:56:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:44.124 06:56:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:44.124 06:56:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:44.124 06:56:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:44.124 06:56:05 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:44.124 06:56:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:44.124 06:56:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.124 06:56:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.124 06:56:05 -- nvmf/common.sh@469 -- # nvmfpid=1309672 00:15:44.124 06:56:05 -- nvmf/common.sh@470 -- # waitforlisten 1309672 00:15:44.124 06:56:05 -- common/autotest_common.sh@829 -- # '[' -z 1309672 ']' 00:15:44.124 06:56:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.124 06:56:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.124 06:56:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.124 06:56:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.124 06:56:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.124 06:56:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:44.124 [2024-12-15 06:56:05.691050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
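The @444-@446 steps in nvmf/common.sh above are where the harness turns the per-interface address list into the two target IPs used for the rest of the run. A minimal standalone sketch of the same head/tail split, assuming a newline-separated RDMA_IP_LIST as in the trace:

    # One IPv4 address per RDMA-capable interface, in discovery order.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    # First listener address: the first line of the list.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    # Second address: drop the first line, then take the next one.
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)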
00:15:44.124 [2024-12-15 06:56:05.691104] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.124 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.382 [2024-12-15 06:56:05.762777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.382 [2024-12-15 06:56:05.799720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.382 [2024-12-15 06:56:05.799847] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.382 [2024-12-15 06:56:05.799858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.382 [2024-12-15 06:56:05.799866] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.382 [2024-12-15 06:56:05.799970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.382 [2024-12-15 06:56:05.800053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.382 [2024-12-15 06:56:05.800055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.949 06:56:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.949 06:56:06 -- common/autotest_common.sh@862 -- # return 0 00:15:44.949 06:56:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:44.949 06:56:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.949 06:56:06 -- common/autotest_common.sh@10 -- # set +x 00:15:44.949 06:56:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.949 06:56:06 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:44.949 06:56:06 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:45.208 [2024-12-15 06:56:06.736922] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14e7900/0x14ebdb0) succeed. 00:15:45.208 [2024-12-15 06:56:06.746021] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14e8e00/0x152d450) succeed. 
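The two create_ib_device notices are nvmf_tgt's RDMA transport claiming both mlx5 ports right after nvmf_create_transport. The traced RPC, reflowed on its own (path shortened; assuming -u keeps its usual io-unit-size meaning, in bytes):

    # Create the RDMA transport inside the running nvmf_tgt; both HCA ports
    # are picked up automatically, matching the mlx5_0/mlx5_1 notices above.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192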
00:15:45.466 06:56:06 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:45.466 06:56:07 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:45.724 [2024-12-15 06:56:07.204704] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:45.724 06:56:07 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:45.982 06:56:07 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:45.982 Malloc0 00:15:45.982 06:56:07 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:46.240 Delay0 00:15:46.240 06:56:07 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.499 06:56:07 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:46.499 NULL1 00:15:46.757 06:56:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:46.757 06:56:08 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1310231 00:15:46.757 06:56:08 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:46.757 06:56:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:46.757 06:56:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.757 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.133 Read completed with error (sct=0, sc=11) 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 06:56:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.133 06:56:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:48.133 06:56:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:48.391 true 00:15:48.391 06:56:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 
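Condensed from the @29-@36 markers, the target-side setup behind the trace above is the following RPC sequence (rpc.py path shortened; the comments are readings of the traced arguments, not new output):

    # Subsystem with allow-any-host (-a) and at most 10 namespaces (-m 10).
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # Malloc0: 32 MiB ramdisk, 512 B blocks; Delay0 wraps it with ~1 s
    # (1000000 us) average and p99 latency on both reads and writes.
    ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # NULL1: 1000 MiB null bdev, 512 B blocks; this is the one resized below.
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1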
00:15:48.391 06:56:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.324 06:56:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.324 06:56:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:49.324 06:56:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:49.582 true 00:15:49.582 06:56:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:49.582 06:56:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.517 06:56:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.517 06:56:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:50.517 06:56:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:50.775 true 00:15:50.775 06:56:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:50.775 06:56:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.711 06:56:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.711 06:56:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 
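On the initiator side the whole run is a single spdk_nvme_perf process (PID 1310231 in the kill -0 probes). The traced invocation, reflowed with a reading of its flags; the -Q behaviour is inferred from the output, not from documentation:

    # 30 s of 512 B random reads at queue depth 128 on one core (-c 0x1),
    # connected over RDMA to the first target IP. -Q 1000 appears to
    # rate-limit error logging, hence the "Message suppressed 999 times"
    # prefix on each reported read error above.
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000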
00:15:51.711 06:56:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:51.970 true 00:15:51.970 06:56:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:51.970 06:56:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 06:56:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.905 06:56:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:52.905 06:56:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:53.164 true 00:15:53.164 06:56:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:53.164 06:56:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.101 06:56:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.101 06:56:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:54.101 06:56:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:54.360 true 00:15:54.360 06:56:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:54.360 06:56:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.297 06:56:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
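The block that repeats from here on is the hot-plug loop itself; read back from the @44-@50 markers it is roughly the following (a sketch reconstructed from the xtrace, not the verbatim script; null_size starts at the 1000 set at @25 earlier):

    null_size=1000
    while kill -0 "$PERF_PID"; do    # loop for as long as perf is still running
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                          # 1001, 1002, ...
        ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"  # grow NULL1 under I/O
    done

Reads that race the namespace removal come back to perf as errors (the sct=0, sc=11 completions above), which is the behaviour this stress test is exercising.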
00:15:55.297 06:56:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:55.297 06:56:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:55.557 true 00:15:55.557 06:56:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:55.557 06:56:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 06:56:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.495 06:56:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:56.495 06:56:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:56.754 true 00:15:56.754 06:56:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:56.754 06:56:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 06:56:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.694 06:56:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:57.694 06:56:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:57.953 true 00:15:57.953 06:56:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:57.953 06:56:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 06:56:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.891 06:56:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:58.891 06:56:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:59.150 true 00:15:59.150 06:56:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:15:59.150 06:56:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.087 06:56:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.087 06:56:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:00.087 06:56:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:00.346 true 00:16:00.346 06:56:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:00.346 06:56:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.283 06:56:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.542 06:56:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:01.542 06:56:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:01.542 true 00:16:01.542 06:56:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:01.542 06:56:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.480 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:16:02.480 06:56:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.738 06:56:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:02.738 06:56:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:02.738 true 00:16:02.738 06:56:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:02.738 06:56:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.678 06:56:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:03.938 06:56:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:03.938 06:56:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:03.938 true 00:16:03.938 06:56:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:03.938 06:56:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.875 06:56:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:04.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.134 06:56:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:05.134 06:56:26 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:05.134 true 00:16:05.134 06:56:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:05.134 06:56:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.072 06:56:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.331 06:56:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:06.331 06:56:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:06.331 true 00:16:06.331 06:56:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:06.331 06:56:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.269 06:56:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:07.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.528 06:56:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:07.528 06:56:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:07.788 true 00:16:07.788 06:56:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:07.788 06:56:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 06:56:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:08.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.725 06:56:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:08.725 06:56:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:08.984 true 00:16:08.984 06:56:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:08.984 06:56:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 06:56:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.922 06:56:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:09.922 06:56:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:10.181 true 00:16:10.181 06:56:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:10.181 06:56:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 06:56:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.119 06:56:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:11.119 06:56:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:11.378 true 00:16:11.378 06:56:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:11.378 06:56:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.403 06:56:33 -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.403 06:56:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:12.403 06:56:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:12.403 true 00:16:12.403 06:56:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:12.403 06:56:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.341 06:56:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:13.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.602 06:56:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:13.602 06:56:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:13.862 true 00:16:13.862 06:56:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:13.862 06:56:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.798 06:56:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:14.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.799 06:56:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:14.799 06:56:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:15.058 true 00:16:15.058 06:56:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:15.058 06:56:36 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 06:56:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.995 06:56:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:15.995 06:56:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:16.253 true 00:16:16.254 06:56:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:16.254 06:56:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.192 06:56:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.192 06:56:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:17.192 06:56:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:17.451 true 00:16:17.451 06:56:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:17.451 06:56:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.710 06:56:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.710 06:56:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:17.710 06:56:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:17.969 true 00:16:17.969 06:56:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:17.969 06:56:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.228 06:56:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.228 06:56:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:18.228 06:56:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:18.487 true 00:16:18.487 06:56:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:18.487 06:56:40 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.746 06:56:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:19.005 06:56:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:19.005 06:56:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:19.005 true 00:16:19.005 06:56:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:19.005 06:56:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:19.005 Initializing NVMe Controllers
00:16:19.005 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:19.005 Controller IO queue size 128, less than required.
00:16:19.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:19.005 Controller IO queue size 128, less than required.
00:16:19.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:19.005 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:19.005 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:16:19.005 Initialization complete. Launching workers.
00:16:19.005 ========================================================
00:16:19.005                                                                                 Latency(us)
00:16:19.005 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:16:19.005 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    6052.23       2.96   18426.91     879.05 1132591.04
00:16:19.005 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   35851.20      17.51    3570.29    1945.73  279994.11
00:16:19.005 ========================================================
00:16:19.005 Total                                                                  :   41903.43      20.46    5716.08     879.05 1132591.04
00:16:19.005
00:16:19.264 06:56:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:19.524 06:56:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:19.524 06:56:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:19.524 true 00:16:19.524 06:56:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310231 00:16:19.524 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1310231) - No such process 00:16:19.524 06:56:41 -- target/ns_hotplug_stress.sh@53 -- # wait 1310231 00:16:19.524 06:56:41 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.783 06:56:41 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:20.041 06:56:41 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:20.041 06:56:41 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:20.041 06:56:41 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:20.041 06:56:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:20.041 06:56:41 -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:20.041 null0 00:16:20.299 06:56:41 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:20.299 06:56:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:20.300 06:56:41 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:20.300 null1 00:16:20.300 06:56:41 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:20.300 06:56:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:20.300 06:56:41 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:20.559 null2 00:16:20.559 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:20.559 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:20.559 06:56:42 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:20.818 null3 00:16:20.818 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:20.818 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:20.818 06:56:42 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:20.818 null4 00:16:20.818 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:20.818 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:20.818 06:56:42 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:21.077 null5 00:16:21.077 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:21.077 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:21.077 06:56:42 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:21.337 null6 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:21.337 null7 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
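From the @58-@64 markers, the second phase sets up eight null bdevs and then launches one add_remove worker per bdev in the background; roughly (a sketch reconstructed from the xtrace, not the verbatim script):

    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        ./scripts/rpc.py bdev_null_create "null$i" 100 4096   # 100 MiB, 4 KiB blocks
    done
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # nsid i+1 paired with bdev null$i
        pids+=($!)
    done
    wait "${pids[@]}"                       # the @66 wait on all eight worker PIDs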
00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.337 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
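Each worker is the add_remove function; its body can be read back from the @14-@18 markers (a sketch assuming the loop bound of 10 visible in the trace):

    add_remove() {
        local nsid=$1 bdev=$2
        # Ten add/remove cycles of the same namespace ID against cnode1;
        # eight of these run concurrently, one per null bdev.
        for ((i = 0; i < 10; i++)); do
            ./scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }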
00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@66 -- # wait 1316273 1316274 1316275 1316277 1316280 1316281 1316283 1316285 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.338 06:56:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:21.596 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:21.855 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.855 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.855 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:16:21.855 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:21.856 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:22.115 06:56:43 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.115 06:56:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:22.376 06:56:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.635 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.894 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:22.895 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.895 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:22.895 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:22.895 06:56:44 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:22.895 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:23.153 06:56:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.412 06:56:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:23.412 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.678 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.679 06:56:45 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:23.679 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:23.938 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:24.197 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:24.456 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:24.456 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:24.456 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.456 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:24.456 06:56:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.456 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:24.716 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:16:24.975 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:25.235 06:56:46 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:25.235 06:56:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:25.235 06:56:46 -- nvmf/common.sh@116 -- # sync 00:16:25.235 06:56:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:25.235 06:56:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:25.235 06:56:46 -- nvmf/common.sh@119 -- # set +e 00:16:25.235 06:56:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:25.235 06:56:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:25.235 rmmod nvme_rdma 00:16:25.235 rmmod nvme_fabrics 00:16:25.235 06:56:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:25.494 06:56:46 -- nvmf/common.sh@123 -- # set -e 00:16:25.494 06:56:46 -- nvmf/common.sh@124 -- # return 0 00:16:25.494 06:56:46 -- nvmf/common.sh@477 -- # '[' -n 1309672 ']' 00:16:25.494 06:56:46 -- nvmf/common.sh@478 -- # killprocess 1309672 00:16:25.494 06:56:46 -- common/autotest_common.sh@936 -- # '[' -z 1309672 ']' 00:16:25.494 06:56:46 -- common/autotest_common.sh@940 -- # kill -0 1309672 00:16:25.494 06:56:46 -- common/autotest_common.sh@941 -- # uname 00:16:25.494 06:56:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.494 06:56:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1309672 00:16:25.494 06:56:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:25.494 06:56:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:25.494 06:56:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1309672' 00:16:25.494 killing process with pid 1309672 00:16:25.494 06:56:46 -- common/autotest_common.sh@955 -- # kill 1309672 00:16:25.494 06:56:46 -- common/autotest_common.sh@960 -- # wait 1309672 
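What the interleaved trace above amounts to: ns_hotplug_stress.sh runs eight add_remove workers in parallel, each binding one null bdev to a fixed namespace ID and hot-adding then hot-removing it ten times against nqn.2016-06.io.spdk:cnode1 while traffic runs, then reaps them with wait. A minimal bash sketch of that pattern, paraphrased from the @14-@18 and @62-@66 trace lines (rpc_py and nthreads stand in for the script's own variables):

    # One worker: repeatedly attach and detach a single namespace.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # Eight workers race each other; their xtrace output interleaves as above.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

The raciness is the point: the target's namespace attach and detach paths get exercised concurrently with I/O, which is why add and remove calls for different namespace IDs appear shuffled in the log.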
00:16:25.754 06:56:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:25.754 06:56:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:16:25.754
00:16:25.754 real 0m48.398s
00:16:25.754 user 3m18.254s
00:16:25.754 sys 0m13.636s
00:16:25.754 06:56:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:25.754 06:56:47 -- common/autotest_common.sh@10 -- # set +x
00:16:25.754 ************************************
00:16:25.754 END TEST nvmf_ns_hotplug_stress
00:16:25.754 ************************************
00:16:25.754 06:56:47 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:16:25.754 06:56:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:25.754 06:56:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:25.754 06:56:47 -- common/autotest_common.sh@10 -- # set +x
00:16:25.754 ************************************
00:16:25.754 START TEST nvmf_connect_stress
00:16:25.754 ************************************
00:16:25.754 06:56:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:16:25.754 * Looking for test storage...
00:16:25.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:25.754 06:56:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:25.754 06:56:47 -- common/autotest_common.sh@1690 -- # lcov --version
00:16:25.754 06:56:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:25.755 06:56:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:25.755 06:56:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:25.755 06:56:47 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:25.755 06:56:47 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:25.755 06:56:47 -- scripts/common.sh@335 -- # IFS=.-:
00:16:25.755 06:56:47 -- scripts/common.sh@335 -- # read -ra ver1
00:16:25.755 06:56:47 -- scripts/common.sh@336 -- # IFS=.-:
00:16:25.755 06:56:47 -- scripts/common.sh@336 -- # read -ra ver2
00:16:25.755 06:56:47 -- scripts/common.sh@337 -- # local 'op=<'
00:16:25.755 06:56:47 -- scripts/common.sh@339 -- # ver1_l=2
00:16:25.755 06:56:47 -- scripts/common.sh@340 -- # ver2_l=1
00:16:25.755 06:56:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:25.755 06:56:47 -- scripts/common.sh@343 -- # case "$op" in
00:16:25.755 06:56:47 -- scripts/common.sh@344 -- # : 1
00:16:25.755 06:56:47 -- scripts/common.sh@363 -- # (( v = 0 ))
00:16:25.755 06:56:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:25.755 06:56:47 -- scripts/common.sh@364 -- # decimal 1
00:16:25.755 06:56:47 -- scripts/common.sh@352 -- # local d=1
00:16:25.755 06:56:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:25.755 06:56:47 -- scripts/common.sh@354 -- # echo 1
00:16:25.755 06:56:47 -- scripts/common.sh@364 -- # ver1[v]=1
00:16:25.755 06:56:47 -- scripts/common.sh@365 -- # decimal 2
00:16:25.755 06:56:47 -- scripts/common.sh@352 -- # local d=2
00:16:25.755 06:56:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:25.755 06:56:47 -- scripts/common.sh@354 -- # echo 2
00:16:25.755 06:56:47 -- scripts/common.sh@365 -- # ver2[v]=2
00:16:25.755 06:56:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:25.755 06:56:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:25.755 06:56:47 -- scripts/common.sh@367 -- # return 0
00:16:25.755 06:56:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:25.755 06:56:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:25.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:25.755 --rc genhtml_branch_coverage=1
00:16:25.755 --rc genhtml_function_coverage=1
00:16:25.755 --rc genhtml_legend=1
00:16:25.755 --rc geninfo_all_blocks=1
00:16:25.755 --rc geninfo_unexecuted_blocks=1
00:16:25.755
00:16:25.755 '
00:16:25.755 06:56:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:25.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:25.755 --rc genhtml_branch_coverage=1
00:16:25.755 --rc genhtml_function_coverage=1
00:16:25.755 --rc genhtml_legend=1
00:16:25.755 --rc geninfo_all_blocks=1
00:16:25.755 --rc geninfo_unexecuted_blocks=1
00:16:25.755
00:16:25.755 '
00:16:26.014 06:56:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:16:26.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:26.014 --rc genhtml_branch_coverage=1
00:16:26.015 --rc genhtml_function_coverage=1
00:16:26.015 --rc genhtml_legend=1
00:16:26.015 --rc geninfo_all_blocks=1
00:16:26.015 --rc geninfo_unexecuted_blocks=1
00:16:26.015
00:16:26.015 '
00:16:26.015 06:56:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:16:26.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:26.015 --rc genhtml_branch_coverage=1
00:16:26.015 --rc genhtml_function_coverage=1
00:16:26.015 --rc genhtml_legend=1
00:16:26.015 --rc geninfo_all_blocks=1
00:16:26.015 --rc geninfo_unexecuted_blocks=1
00:16:26.015
00:16:26.015 '
00:16:26.015 06:56:47 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:16:26.015 06:56:47 -- nvmf/common.sh@7 -- # uname -s
00:16:26.015 06:56:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:26.015 06:56:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:26.015 06:56:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:26.015 06:56:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:26.015 06:56:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:26.015 06:56:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:26.015 06:56:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:26.015 06:56:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:26.015 06:56:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:26.015 06:56:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:26.015 06:56:47 -- nvmf/common.sh@17 -- #
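The lt 1.15 2 call traced above (scripts/common.sh@372 onward) decides whether the installed lcov is new enough for the --rc option spellings that follow: both version strings are split on '.', '-' and ':' and compared numerically field by field, padding the shorter one with zeros. A condensed sketch of that logic (the real cmp_versions also routes each field through a decimal() sanitizer, which the @352-@354 lines show firing):

    # Strict less-than on dotted version strings, e.g. lt 1.15 2.
    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local IFS=.-:                 # the separators the @335/@336 trace shows
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v a b
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # zero-pad the shorter version
            ((a > b)) && { [[ $op == ">" || $op == ">=" ]]; return; }
            ((a < b)) && { [[ $op == "<" || $op == "<=" ]]; return; }
        done
        [[ $op == "<=" || $op == ">=" || $op == "==" ]]   # all fields equal
    }

Here ver1=(1 15) and ver2=(2): the first fields already differ (1 < 2), so lt succeeds and the newer lcov/genhtml flag set is exported into LCOV_OPTS above.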
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:26.015 06:56:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:26.015 06:56:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.015 06:56:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.015 06:56:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.015 06:56:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:26.015 06:56:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.015 06:56:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.015 06:56:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.015 06:56:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.015 06:56:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.015 06:56:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.015 06:56:47 -- paths/export.sh@5 -- # export PATH 00:16:26.015 06:56:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.015 06:56:47 -- nvmf/common.sh@46 -- # : 0 00:16:26.015 06:56:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:26.015 06:56:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:26.015 06:56:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:26.015 06:56:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.015 06:56:47 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.015 06:56:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:26.015 06:56:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:26.015 06:56:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:26.015 06:56:47 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:26.015 06:56:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:26.015 06:56:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.015 06:56:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:26.015 06:56:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:26.015 06:56:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:26.015 06:56:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.015 06:56:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.015 06:56:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.015 06:56:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:26.015 06:56:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:26.015 06:56:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:26.015 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:16:32.591 06:56:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:32.591 06:56:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:32.591 06:56:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:32.591 06:56:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:32.591 06:56:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:32.591 06:56:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:32.591 06:56:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:32.591 06:56:53 -- nvmf/common.sh@294 -- # net_devs=() 00:16:32.591 06:56:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:32.591 06:56:53 -- nvmf/common.sh@295 -- # e810=() 00:16:32.591 06:56:53 -- nvmf/common.sh@295 -- # local -ga e810 00:16:32.591 06:56:53 -- nvmf/common.sh@296 -- # x722=() 00:16:32.591 06:56:53 -- nvmf/common.sh@296 -- # local -ga x722 00:16:32.591 06:56:53 -- nvmf/common.sh@297 -- # mlx=() 00:16:32.591 06:56:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:32.591 06:56:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.591 06:56:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:32.591 06:56:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:32.591 06:56:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:32.591 06:56:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
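The e810, x722 and mlx arrays being filled above are a PCI ID table: Intel ice and i40e parts versus Mellanox ConnectX parts (0x15b3:0x1015 is a ConnectX-4 Lx, which is what this rig reports next). Discovery then walks the matching functions and maps each one to its kernel netdev through sysfs, which is how the Found lines below resolve 0000:d9:00.0 and 0000:d9:00.1 to mlx_0_0 and mlx_0_1. The shape of that lookup, reduced to its essentials (common.sh actually works from a cached pci_bus_cache; the lspci field layout here is an assumption, not the verbatim code):

    # List netdev names for every Mellanox (vendor 0x15b3) function in the host.
    net_devs=()
    while read -r pci vendor device; do
        [[ $vendor == 15b3 ]] || continue
        echo "Found $pci (0x$vendor - 0x$device)"
        # A bound netdev shows up as /sys/bus/pci/devices/<bdf>/net/<ifname>.
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] && net_devs+=("${path##*/}")
        done
    done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $3, $4}')
    echo "net_devs: ${net_devs[*]}"

The mlx_0_* names are the renamed kernel interfaces; the altname lines later in the log preserve the original enp217s0f* predictable names for the same ports.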
00:16:32.591 06:56:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:32.591 06:56:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:32.591 06:56:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:32.591 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:32.591 06:56:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:32.591 06:56:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:32.591 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:32.591 06:56:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:32.591 06:56:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:32.591 06:56:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.591 06:56:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:32.591 06:56:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.591 06:56:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:32.591 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:32.591 06:56:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.591 06:56:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.591 06:56:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:32.591 06:56:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.591 06:56:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:32.591 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:32.591 06:56:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.591 06:56:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:32.591 06:56:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:32.591 06:56:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:32.591 06:56:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:32.591 06:56:53 -- nvmf/common.sh@57 -- # uname 00:16:32.591 06:56:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:32.591 06:56:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:32.591 06:56:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:32.591 06:56:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:32.591 
06:56:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:32.591 06:56:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:32.591 06:56:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:32.591 06:56:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:32.591 06:56:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:32.591 06:56:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:32.591 06:56:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:32.591 06:56:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:32.591 06:56:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:32.591 06:56:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:32.591 06:56:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:32.591 06:56:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:32.591 06:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:32.591 06:56:53 -- nvmf/common.sh@104 -- # continue 2 00:16:32.591 06:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:32.591 06:56:53 -- nvmf/common.sh@104 -- # continue 2 00:16:32.591 06:56:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:32.591 06:56:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:32.591 06:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:32.591 06:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:32.591 06:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:32.591 06:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:32.591 06:56:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:32.591 06:56:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:32.591 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:32.591 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:32.591 altname enp217s0f0np0 00:16:32.591 altname ens818f0np0 00:16:32.591 inet 192.168.100.8/24 scope global mlx_0_0 00:16:32.591 valid_lft forever preferred_lft forever 00:16:32.591 06:56:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:32.591 06:56:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:32.591 06:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:32.591 06:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:32.591 06:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:32.591 06:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:32.591 06:56:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:32.591 06:56:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:32.591 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:32.591 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:32.591 altname enp217s0f1np1 
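allocate_nic_ips has just pulled 192.168.100.8 off mlx_0_0 with a three-stage pipeline, the same commands are about to print mlx_0_1's 192.168.100.9 below, and the accumulated list is then split back into first and second target IPs with head and tail (@444-@446). The helpers, condensed from the trace (same commands, just wrapped for readability):

    # Field 4 of `ip -o -4 addr show <if>` is the CIDR address, e.g. 192.168.100.8/24.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"

    # First line -> first target IP, second line -> second target IP.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)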
00:16:32.591 altname ens818f1np1 00:16:32.591 inet 192.168.100.9/24 scope global mlx_0_1 00:16:32.591 valid_lft forever preferred_lft forever 00:16:32.591 06:56:53 -- nvmf/common.sh@410 -- # return 0 00:16:32.591 06:56:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:32.591 06:56:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:32.591 06:56:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:32.591 06:56:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:32.591 06:56:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:32.591 06:56:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:32.591 06:56:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:32.591 06:56:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:32.591 06:56:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:32.591 06:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:32.591 06:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:32.591 06:56:53 -- nvmf/common.sh@104 -- # continue 2 00:16:32.591 06:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:32.591 06:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.592 06:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:32.592 06:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:32.592 06:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:32.592 06:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:32.592 06:56:53 -- nvmf/common.sh@104 -- # continue 2 00:16:32.592 06:56:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:32.592 06:56:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:32.592 06:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:32.592 06:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:32.592 06:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:32.592 06:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:32.592 06:56:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:32.592 06:56:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:32.592 06:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:32.592 06:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:32.592 06:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:32.592 06:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:32.592 06:56:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:32.592 192.168.100.9' 00:16:32.592 06:56:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:32.592 192.168.100.9' 00:16:32.592 06:56:53 -- nvmf/common.sh@445 -- # head -n 1 00:16:32.592 06:56:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:32.592 06:56:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:32.592 192.168.100.9' 00:16:32.592 06:56:53 -- nvmf/common.sh@446 -- # tail -n +2 00:16:32.592 06:56:53 -- nvmf/common.sh@446 -- # head -n 1 00:16:32.592 06:56:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:32.592 06:56:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:32.592 06:56:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:32.592 06:56:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:32.592 06:56:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:32.592 06:56:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:32.592 06:56:53 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:32.592 06:56:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:32.592 06:56:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:32.592 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:16:32.592 06:56:53 -- nvmf/common.sh@469 -- # nvmfpid=1320421 00:16:32.592 06:56:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:32.592 06:56:53 -- nvmf/common.sh@470 -- # waitforlisten 1320421 00:16:32.592 06:56:53 -- common/autotest_common.sh@829 -- # '[' -z 1320421 ']' 00:16:32.592 06:56:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.592 06:56:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.592 06:56:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.592 06:56:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.592 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:16:32.592 [2024-12-15 06:56:53.895269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:32.592 [2024-12-15 06:56:53.895324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.592 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.592 [2024-12-15 06:56:53.967584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:32.592 [2024-12-15 06:56:54.004902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:32.592 [2024-12-15 06:56:54.005023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.592 [2024-12-15 06:56:54.005033] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.592 [2024-12-15 06:56:54.005041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
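[Editor's note] The allocate_nic_ips / get_available_rdma_ips trace above reduces to a small set of helpers. A minimal sketch, reconstructed only from the commands visible in the trace (error handling and the soft-RoCE rxe fallback are elided):

    get_ip_address() {
        local interface=$1
        # "6: mlx_0_0    inet 192.168.100.8/24 ..." -> "192.168.100.8"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

With the -z check on NVMF_FIRST_TARGET_IP passing, the script appends --num-shared-buffers 1024 to NVMF_TRANSPORT_OPTS and modprobes nvme-rdma, exactly as traced above.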
00:16:32.592 [2024-12-15 06:56:54.005144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.592 [2024-12-15 06:56:54.005210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.592 [2024-12-15 06:56:54.005212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.160 06:56:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.160 06:56:54 -- common/autotest_common.sh@862 -- # return 0 00:16:33.160 06:56:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:33.160 06:56:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.160 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.160 06:56:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.160 06:56:54 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:33.160 06:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.160 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.160 [2024-12-15 06:56:54.784855] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x86e900/0x872db0) succeed. 00:16:33.160 [2024-12-15 06:56:54.793983] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x86fe00/0x8b4450) succeed. 00:16:33.419 06:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.419 06:56:54 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:33.419 06:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.419 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.419 06:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.419 06:56:54 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:33.419 06:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.419 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.420 [2024-12-15 06:56:54.909453] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:33.420 06:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.420 06:56:54 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:33.420 06:56:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.420 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:33.420 NULL1 00:16:33.420 06:56:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.420 06:56:54 -- target/connect_stress.sh@21 -- # PERF_PID=1320650 00:16:33.420 06:56:54 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:33.420 06:56:54 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:33.420 06:56:54 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- 
target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:54 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:55 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:55 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:55 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:55 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:33.420 06:56:55 -- target/connect_stress.sh@28 -- # cat 00:16:33.420 06:56:55 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:33.420 06:56:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.420 06:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.420 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:33.987 06:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.987 06:56:55 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:33.987 06:56:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.987 06:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.987 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:34.246 06:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.246 06:56:55 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:34.246 06:56:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.246 06:56:55 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:34.246 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:34.505 06:56:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.505 06:56:55 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:34.505 06:56:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.505 06:56:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.505 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:34.764 06:56:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.764 06:56:56 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:34.764 06:56:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.764 06:56:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.764 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.023 06:56:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.023 06:56:56 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:35.023 06:56:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.023 06:56:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.023 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 06:56:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 06:56:56 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:35.592 06:56:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.592 06:56:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:16:35.851 06:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.851 06:56:57 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:35.851 06:56:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.851 06:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.851 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:16:36.110 06:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.110 06:56:57 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:36.110 06:56:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.110 06:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.110 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:16:36.369 06:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.369 06:56:57 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:36.369 06:56:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.369 06:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.369 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:16:36.938 06:56:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.938 06:56:58 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:36.938 06:56:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.938 06:56:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.938 06:56:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.197 06:56:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.197 06:56:58 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:37.197 06:56:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.197 06:56:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.197 06:56:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.456 06:56:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.456 06:56:58 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:37.456 06:56:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.456 06:56:58 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:37.456 06:56:58 -- common/autotest_common.sh@10 -- # set +x 00:16:37.715 06:56:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.715 06:56:59 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:37.715 06:56:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.715 06:56:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.715 06:56:59 -- common/autotest_common.sh@10 -- # set +x 00:16:37.974 06:56:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.974 06:56:59 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:37.974 06:56:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.974 06:56:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.974 06:56:59 -- common/autotest_common.sh@10 -- # set +x 00:16:38.542 06:56:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.542 06:56:59 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:38.542 06:56:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.542 06:56:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.542 06:56:59 -- common/autotest_common.sh@10 -- # set +x 00:16:38.801 06:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.801 06:57:00 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:38.801 06:57:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.801 06:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.801 06:57:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.059 06:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.059 06:57:00 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:39.059 06:57:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.059 06:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.059 06:57:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.318 06:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.318 06:57:00 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:39.318 06:57:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.318 06:57:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.318 06:57:00 -- common/autotest_common.sh@10 -- # set +x 00:16:39.577 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.577 06:57:01 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:39.577 06:57:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.577 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.577 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.143 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.143 06:57:01 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:40.143 06:57:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.143 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.143 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.402 06:57:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.402 06:57:01 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:40.402 06:57:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.402 06:57:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.402 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:16:40.662 06:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.662 06:57:02 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:40.662 06:57:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.662 06:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.662 
06:57:02 -- common/autotest_common.sh@10 -- # set +x 00:16:40.921 06:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.921 06:57:02 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:40.921 06:57:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.921 06:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.921 06:57:02 -- common/autotest_common.sh@10 -- # set +x 00:16:41.489 06:57:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.489 06:57:02 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:41.489 06:57:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.489 06:57:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.489 06:57:02 -- common/autotest_common.sh@10 -- # set +x 00:16:41.748 06:57:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.748 06:57:03 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:41.748 06:57:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.748 06:57:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.748 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.008 06:57:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.008 06:57:03 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:42.008 06:57:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.008 06:57:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.008 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.267 06:57:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.267 06:57:03 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:42.267 06:57:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.267 06:57:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.267 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:16:42.526 06:57:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.526 06:57:04 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:42.526 06:57:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.526 06:57:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.526 06:57:04 -- common/autotest_common.sh@10 -- # set +x 00:16:43.094 06:57:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.094 06:57:04 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:43.094 06:57:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.094 06:57:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.094 06:57:04 -- common/autotest_common.sh@10 -- # set +x 00:16:43.353 06:57:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.353 06:57:04 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:43.353 06:57:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.353 06:57:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.353 06:57:04 -- common/autotest_common.sh@10 -- # set +x 00:16:43.612 06:57:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.612 06:57:05 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:43.612 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:43.612 06:57:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.612 06:57:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.612 06:57:05 -- common/autotest_common.sh@10 -- # set +x 00:16:43.870 06:57:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.870 06:57:05 -- target/connect_stress.sh@34 -- # kill -0 1320650 00:16:43.870 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1320650) - No such process 00:16:43.870 06:57:05 -- target/connect_stress.sh@38 -- # wait 1320650 00:16:43.870 06:57:05 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:43.870 06:57:05 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:43.870 06:57:05 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:43.870 06:57:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:43.870 06:57:05 -- nvmf/common.sh@116 -- # sync 00:16:43.870 06:57:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:43.870 06:57:05 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:43.870 06:57:05 -- nvmf/common.sh@119 -- # set +e 00:16:43.870 06:57:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:43.870 06:57:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:43.870 rmmod nvme_rdma 00:16:43.870 rmmod nvme_fabrics 00:16:44.148 06:57:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:44.148 06:57:05 -- nvmf/common.sh@123 -- # set -e 00:16:44.148 06:57:05 -- nvmf/common.sh@124 -- # return 0 00:16:44.148 06:57:05 -- nvmf/common.sh@477 -- # '[' -n 1320421 ']' 00:16:44.148 06:57:05 -- nvmf/common.sh@478 -- # killprocess 1320421 00:16:44.148 06:57:05 -- common/autotest_common.sh@936 -- # '[' -z 1320421 ']' 00:16:44.148 06:57:05 -- common/autotest_common.sh@940 -- # kill -0 1320421 00:16:44.148 06:57:05 -- common/autotest_common.sh@941 -- # uname 00:16:44.148 06:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.148 06:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1320421 00:16:44.148 06:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:44.148 06:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:44.148 06:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1320421' 00:16:44.148 killing process with pid 1320421 00:16:44.148 06:57:05 -- common/autotest_common.sh@955 -- # kill 1320421 00:16:44.148 06:57:05 -- common/autotest_common.sh@960 -- # wait 1320421 00:16:44.554 06:57:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:44.554 06:57:05 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:44.554 00:16:44.554 real 0m18.582s 00:16:44.554 user 0m42.658s 00:16:44.554 sys 0m7.459s 00:16:44.554 06:57:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:44.554 06:57:05 -- common/autotest_common.sh@10 -- # set +x 00:16:44.554 ************************************ 00:16:44.554 END TEST nvmf_connect_stress 00:16:44.554 ************************************ 00:16:44.554 06:57:05 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:44.554 06:57:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:44.554 06:57:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.554 06:57:05 -- common/autotest_common.sh@10 -- # set +x 00:16:44.554 ************************************ 00:16:44.554 START TEST nvmf_fused_ordering 00:16:44.554 ************************************ 00:16:44.554 06:57:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:44.554 * Looking for test storage... 
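[Editor's note] The long run of repeated "kill -0 1320650" / "rpc_cmd" entries that fills the connect_stress run above, closed by the "No such process" message and the wait, is a monitor loop. A structural sketch inferred from the connect_stress.sh line numbers in the trace (34/35/38/39); the RPC payloads queued into rpc.txt are not visible in this log:

    while kill -0 "$PERF_PID"; do   # line 34: fails (and prints the error above) once the stressor exits
        rpc_cmd < "$rpcs"           # line 35: replay the queued RPCs against the live target
    done
    wait "$PERF_PID"                # line 38: reap the connect_stress process
    rm -f "$rpcs"                   # line 39: drop the RPC queue file, as traced above

The loop's purpose is to keep the target's RPC plane busy while the connect_stress binary repeatedly connects to and disconnects from nqn.2016-06.io.spdk:cnode1 for its 10-second run.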
00:16:44.554 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:44.554 06:57:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:44.554 06:57:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:44.554 06:57:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:44.554 06:57:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:44.554 06:57:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:44.554 06:57:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:44.554 06:57:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:44.554 06:57:06 -- scripts/common.sh@335 -- # IFS=.-: 00:16:44.554 06:57:06 -- scripts/common.sh@335 -- # read -ra ver1 00:16:44.554 06:57:06 -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.554 06:57:06 -- scripts/common.sh@336 -- # read -ra ver2 00:16:44.554 06:57:06 -- scripts/common.sh@337 -- # local 'op=<' 00:16:44.554 06:57:06 -- scripts/common.sh@339 -- # ver1_l=2 00:16:44.554 06:57:06 -- scripts/common.sh@340 -- # ver2_l=1 00:16:44.554 06:57:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:44.554 06:57:06 -- scripts/common.sh@343 -- # case "$op" in 00:16:44.554 06:57:06 -- scripts/common.sh@344 -- # : 1 00:16:44.554 06:57:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:44.554 06:57:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:44.554 06:57:06 -- scripts/common.sh@364 -- # decimal 1 00:16:44.554 06:57:06 -- scripts/common.sh@352 -- # local d=1 00:16:44.554 06:57:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.554 06:57:06 -- scripts/common.sh@354 -- # echo 1 00:16:44.554 06:57:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:44.554 06:57:06 -- scripts/common.sh@365 -- # decimal 2 00:16:44.554 06:57:06 -- scripts/common.sh@352 -- # local d=2 00:16:44.554 06:57:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.554 06:57:06 -- scripts/common.sh@354 -- # echo 2 00:16:44.554 06:57:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:44.554 06:57:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:44.554 06:57:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:44.554 06:57:06 -- scripts/common.sh@367 -- # return 0 00:16:44.554 06:57:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.554 06:57:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 00:16:44.554 06:57:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 00:16:44.554 06:57:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 
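[Editor's note] The lcov gate above (lt 1.15 2 via cmp_versions 1.15 '<' 2 in scripts/common.sh) splits each version on '.', '-' and ':' and compares component-wise. A hedged reconstruction of the logic the trace walks through; the real helper also routes each component through a decimal() normalizer, and orders its case/loop differently, which this sketch omits:

    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 v lt=0 gt=0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && gt=1 && break
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && lt=1 && break
        done
        case "$op" in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
        esac
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2: 1 < 2 on the first field, so true

Since the installed lcov (1.15) is older than 2, the branch/function coverage flags are folded into LCOV_OPTS the long way, as the exports above show.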
00:16:44.554 06:57:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 00:16:44.554 06:57:06 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.554 06:57:06 -- nvmf/common.sh@7 -- # uname -s 00:16:44.554 06:57:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.554 06:57:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.554 06:57:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.554 06:57:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.554 06:57:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.554 06:57:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.554 06:57:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.555 06:57:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.555 06:57:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.555 06:57:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.555 06:57:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:44.555 06:57:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:44.555 06:57:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.555 06:57:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.555 06:57:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.555 06:57:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:44.555 06:57:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.555 06:57:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.555 06:57:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.555 06:57:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 06:57:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 06:57:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 06:57:06 -- paths/export.sh@5 -- # export PATH 00:16:44.555 06:57:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 06:57:06 -- nvmf/common.sh@46 -- # : 0 00:16:44.555 06:57:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:44.555 06:57:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:44.555 06:57:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:44.555 06:57:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.555 06:57:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.555 06:57:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:44.555 06:57:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:44.555 06:57:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:44.555 06:57:06 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:44.555 06:57:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:44.555 06:57:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.555 06:57:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:44.555 06:57:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:44.555 06:57:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:44.555 06:57:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.555 06:57:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.555 06:57:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.555 06:57:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:44.555 06:57:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:44.555 06:57:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:44.555 06:57:06 -- common/autotest_common.sh@10 -- # set +x 00:16:51.127 06:57:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:51.127 06:57:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:51.127 06:57:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:51.127 06:57:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:51.127 06:57:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:51.127 06:57:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:51.127 06:57:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:51.127 06:57:12 -- nvmf/common.sh@294 -- # net_devs=() 00:16:51.127 06:57:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:51.127 06:57:12 -- nvmf/common.sh@295 -- # e810=() 00:16:51.127 06:57:12 -- nvmf/common.sh@295 -- # local -ga e810 00:16:51.127 06:57:12 -- nvmf/common.sh@296 -- # x722=() 
00:16:51.127 06:57:12 -- nvmf/common.sh@296 -- # local -ga x722 00:16:51.127 06:57:12 -- nvmf/common.sh@297 -- # mlx=() 00:16:51.127 06:57:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:51.127 06:57:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.127 06:57:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:51.127 06:57:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:51.127 06:57:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:51.127 06:57:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:51.127 06:57:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:51.127 06:57:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.127 06:57:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:51.127 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:51.127 06:57:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.127 06:57:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.127 06:57:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:51.127 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:51.127 06:57:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:51.127 06:57:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.128 06:57:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:51.128 06:57:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.128 06:57:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.128 06:57:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.128 06:57:12 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:51.128 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.128 06:57:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.128 06:57:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.128 06:57:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.128 06:57:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:51.128 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.128 06:57:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:51.128 06:57:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:51.128 06:57:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:51.128 06:57:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:51.128 06:57:12 -- nvmf/common.sh@57 -- # uname 00:16:51.128 06:57:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:51.128 06:57:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:51.128 06:57:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:51.128 06:57:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:51.128 06:57:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:51.128 06:57:12 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:51.128 06:57:12 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:51.128 06:57:12 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:51.128 06:57:12 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:51.128 06:57:12 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:51.128 06:57:12 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:51.128 06:57:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.128 06:57:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:51.128 06:57:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:51.128 06:57:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.128 06:57:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:51.128 06:57:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@104 -- # continue 2 00:16:51.128 06:57:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@104 -- # continue 2 00:16:51.128 06:57:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:51.128 06:57:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:51.128 06:57:12 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:51.128 06:57:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:51.128 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.128 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:51.128 altname enp217s0f0np0 00:16:51.128 altname ens818f0np0 00:16:51.128 inet 192.168.100.8/24 scope global mlx_0_0 00:16:51.128 valid_lft forever preferred_lft forever 00:16:51.128 06:57:12 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:51.128 06:57:12 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:51.128 06:57:12 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:51.128 06:57:12 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:51.128 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.128 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:51.128 altname enp217s0f1np1 00:16:51.128 altname ens818f1np1 00:16:51.128 inet 192.168.100.9/24 scope global mlx_0_1 00:16:51.128 valid_lft forever preferred_lft forever 00:16:51.128 06:57:12 -- nvmf/common.sh@410 -- # return 0 00:16:51.128 06:57:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:51.128 06:57:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:51.128 06:57:12 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:51.128 06:57:12 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:51.128 06:57:12 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.128 06:57:12 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:51.128 06:57:12 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:51.128 06:57:12 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.128 06:57:12 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:51.128 06:57:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@104 -- # continue 2 00:16:51.128 06:57:12 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.128 06:57:12 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.128 06:57:12 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@104 -- # continue 2 00:16:51.128 06:57:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:51.128 06:57:12 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:51.128 06:57:12 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:51.128 06:57:12 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:51.128 06:57:12 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:51.128 06:57:12 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:51.128 192.168.100.9' 00:16:51.128 06:57:12 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:51.128 192.168.100.9' 00:16:51.128 06:57:12 -- nvmf/common.sh@445 -- # head -n 1 00:16:51.128 06:57:12 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:51.128 06:57:12 -- nvmf/common.sh@446 -- # tail -n +2 00:16:51.128 06:57:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:51.128 192.168.100.9' 00:16:51.128 06:57:12 -- nvmf/common.sh@446 -- # head -n 1 00:16:51.128 06:57:12 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:51.128 06:57:12 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:51.128 06:57:12 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:51.128 06:57:12 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:51.128 06:57:12 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:51.128 06:57:12 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:51.388 06:57:12 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:51.388 06:57:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:51.388 06:57:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.388 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:16:51.388 06:57:12 -- nvmf/common.sh@469 -- # nvmfpid=1326350 00:16:51.388 06:57:12 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:51.388 06:57:12 -- nvmf/common.sh@470 -- # waitforlisten 1326350 00:16:51.388 06:57:12 -- common/autotest_common.sh@829 -- # '[' -z 1326350 ']' 00:16:51.388 06:57:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.388 06:57:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.388 06:57:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.388 06:57:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.388 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:16:51.388 [2024-12-15 06:57:12.823227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:51.388 [2024-12-15 06:57:12.823273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.388 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.388 [2024-12-15 06:57:12.893545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.388 [2024-12-15 06:57:12.928536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.388 [2024-12-15 06:57:12.928645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.388 [2024-12-15 06:57:12.928655] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.388 [2024-12-15 06:57:12.928663] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.388 [2024-12-15 06:57:12.928683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.326 06:57:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.326 06:57:13 -- common/autotest_common.sh@862 -- # return 0 00:16:52.326 06:57:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:52.326 06:57:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.326 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.326 06:57:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.326 06:57:13 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:52.326 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.326 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.326 [2024-12-15 06:57:13.702711] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x179e550/0x17a2a00) succeed. 00:16:52.326 [2024-12-15 06:57:13.711989] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x179fa00/0x17e40a0) succeed. 
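[Editor's note] Taken together, the rpc_cmd calls traced in both tests (here and in the connect_stress setup earlier) provision the target in the same five steps. Expressed as direct scripts/rpc.py invocations, which rpc_cmd is assumed to wrap over /var/tmp/spdk.sock; every flag below is copied from the trace:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10           # allow any host, serial, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512 B blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The "Attached to nqn.2016-06.io.spdk:cnode1 / Namespace ID: 1 size: 1GB" lines below confirm the fused_ordering initiator saw exactly this namespace.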
00:16:52.326 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.326 06:57:13 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:52.326 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.326 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.326 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.326 06:57:13 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:52.326 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.326 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.326 [2024-12-15 06:57:13.773612] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:52.326 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.326 06:57:13 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:52.326 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.326 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.326 NULL1 00:16:52.326 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.326 06:57:13 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:52.326 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.326 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.326 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.326 06:57:13 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:52.326 06:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.326 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:52.326 06:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.326 06:57:13 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:52.327 [2024-12-15 06:57:13.827641] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:52.327 [2024-12-15 06:57:13.827675] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326628 ]
00:16:52.327 EAL: No free 2048 kB hugepages reported on node 1
00:16:52.586 Attached to nqn.2016-06.io.spdk:cnode1
00:16:52.586 Namespace ID: 1 size: 1GB
00:16:52.586 fused_ordering(0) ... fused_ordering(1023) 00:16:53.110 [1024 fused_ordering iterations logged between 00:16:52.586 and 00:16:53.110; the repetitive per-iteration lines are elided]
00:16:53.110 06:57:14 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:53.110 06:57:14 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:53.110 06:57:14 -- nvmf/common.sh@476 -- # nvmfcleanup
00:16:53.110 06:57:14 -- nvmf/common.sh@116 -- # sync
00:16:53.110 06:57:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:16:53.110 06:57:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:16:53.110 06:57:14 -- nvmf/common.sh@119 -- # set +e
00:16:53.110 06:57:14 -- nvmf/common.sh@120 -- # for i in {1..20}
00:16:53.110 06:57:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:16:53.110 rmmod nvme_rdma
00:16:53.110 rmmod nvme_fabrics
00:16:53.110 06:57:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:53.110 06:57:14 -- nvmf/common.sh@123 -- # set -e
00:16:53.110 06:57:14 -- nvmf/common.sh@124 -- # return 0
00:16:53.110 06:57:14 -- nvmf/common.sh@477 -- # '[' -n 1326350 ']'
00:16:53.110 06:57:14 -- nvmf/common.sh@478 -- # killprocess 1326350
00:16:53.110 06:57:14 -- common/autotest_common.sh@936 -- # '[' -z 1326350 ']'
00:16:53.110 06:57:14 -- common/autotest_common.sh@940 -- # kill -0 1326350
00:16:53.110 06:57:14 -- common/autotest_common.sh@941 -- # uname
00:16:53.110 06:57:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:53.110 06:57:14 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1326350 00:16:53.110 06:57:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.110 06:57:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.110 06:57:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1326350' 00:16:53.110 killing process with pid 1326350 00:16:53.110 06:57:14 -- common/autotest_common.sh@955 -- # kill 1326350 00:16:53.110 06:57:14 -- common/autotest_common.sh@960 -- # wait 1326350 00:16:53.370 06:57:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:53.370 06:57:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:53.370 00:16:53.370 real 0m8.946s 00:16:53.370 user 0m4.713s 00:16:53.370 sys 0m5.596s 00:16:53.370 06:57:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:53.370 06:57:14 -- common/autotest_common.sh@10 -- # set +x 00:16:53.370 ************************************ 00:16:53.370 END TEST nvmf_fused_ordering 00:16:53.370 ************************************ 00:16:53.370 06:57:14 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:53.370 06:57:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:53.370 06:57:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:53.370 06:57:14 -- common/autotest_common.sh@10 -- # set +x 00:16:53.370 ************************************ 00:16:53.370 START TEST nvmf_delete_subsystem 00:16:53.370 ************************************ 00:16:53.370 06:57:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:53.370 * Looking for test storage... 00:16:53.370 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:53.370 06:57:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:53.370 06:57:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:53.370 06:57:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:53.630 06:57:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:53.630 06:57:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:53.630 06:57:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:53.630 06:57:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:53.630 06:57:15 -- scripts/common.sh@335 -- # IFS=.-: 00:16:53.630 06:57:15 -- scripts/common.sh@335 -- # read -ra ver1 00:16:53.630 06:57:15 -- scripts/common.sh@336 -- # IFS=.-: 00:16:53.630 06:57:15 -- scripts/common.sh@336 -- # read -ra ver2 00:16:53.630 06:57:15 -- scripts/common.sh@337 -- # local 'op=<' 00:16:53.631 06:57:15 -- scripts/common.sh@339 -- # ver1_l=2 00:16:53.631 06:57:15 -- scripts/common.sh@340 -- # ver2_l=1 00:16:53.631 06:57:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:53.631 06:57:15 -- scripts/common.sh@343 -- # case "$op" in 00:16:53.631 06:57:15 -- scripts/common.sh@344 -- # : 1 00:16:53.631 06:57:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:53.631 06:57:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:53.631 06:57:15 -- scripts/common.sh@364 -- # decimal 1 00:16:53.631 06:57:15 -- scripts/common.sh@352 -- # local d=1 00:16:53.631 06:57:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:53.631 06:57:15 -- scripts/common.sh@354 -- # echo 1 00:16:53.631 06:57:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:53.631 06:57:15 -- scripts/common.sh@365 -- # decimal 2 00:16:53.631 06:57:15 -- scripts/common.sh@352 -- # local d=2 00:16:53.631 06:57:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:53.631 06:57:15 -- scripts/common.sh@354 -- # echo 2 00:16:53.631 06:57:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:53.631 06:57:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:53.631 06:57:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:53.631 06:57:15 -- scripts/common.sh@367 -- # return 0 00:16:53.631 06:57:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:53.631 06:57:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.631 --rc genhtml_branch_coverage=1 00:16:53.631 --rc genhtml_function_coverage=1 00:16:53.631 --rc genhtml_legend=1 00:16:53.631 --rc geninfo_all_blocks=1 00:16:53.631 --rc geninfo_unexecuted_blocks=1 00:16:53.631 00:16:53.631 ' 00:16:53.631 06:57:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.631 --rc genhtml_branch_coverage=1 00:16:53.631 --rc genhtml_function_coverage=1 00:16:53.631 --rc genhtml_legend=1 00:16:53.631 --rc geninfo_all_blocks=1 00:16:53.631 --rc geninfo_unexecuted_blocks=1 00:16:53.631 00:16:53.631 ' 00:16:53.631 06:57:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.631 --rc genhtml_branch_coverage=1 00:16:53.631 --rc genhtml_function_coverage=1 00:16:53.631 --rc genhtml_legend=1 00:16:53.631 --rc geninfo_all_blocks=1 00:16:53.631 --rc geninfo_unexecuted_blocks=1 00:16:53.631 00:16:53.631 ' 00:16:53.631 06:57:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:53.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.631 --rc genhtml_branch_coverage=1 00:16:53.631 --rc genhtml_function_coverage=1 00:16:53.631 --rc genhtml_legend=1 00:16:53.631 --rc geninfo_all_blocks=1 00:16:53.631 --rc geninfo_unexecuted_blocks=1 00:16:53.631 00:16:53.631 ' 00:16:53.631 06:57:15 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.631 06:57:15 -- nvmf/common.sh@7 -- # uname -s 00:16:53.631 06:57:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.631 06:57:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.631 06:57:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.631 06:57:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.631 06:57:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.631 06:57:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.631 06:57:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.631 06:57:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.631 06:57:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.631 06:57:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.631 06:57:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:53.631 06:57:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:53.631 06:57:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.631 06:57:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.631 06:57:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.631 06:57:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:53.631 06:57:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.631 06:57:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.631 06:57:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.631 06:57:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.631 06:57:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.631 06:57:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.631 06:57:15 -- paths/export.sh@5 -- # export PATH 00:16:53.631 06:57:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.631 06:57:15 -- nvmf/common.sh@46 -- # : 0 00:16:53.631 06:57:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:53.631 06:57:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:53.631 06:57:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:53.631 06:57:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.631 06:57:15 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.631 06:57:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:53.631 06:57:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:53.631 06:57:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:53.631 06:57:15 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:53.631 06:57:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:53.631 06:57:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.631 06:57:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:53.631 06:57:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:53.631 06:57:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:53.631 06:57:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.631 06:57:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.631 06:57:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.631 06:57:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:53.631 06:57:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:53.631 06:57:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:53.631 06:57:15 -- common/autotest_common.sh@10 -- # set +x 00:17:00.203 06:57:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:00.203 06:57:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:00.203 06:57:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:00.203 06:57:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:00.203 06:57:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:00.203 06:57:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:00.203 06:57:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:00.203 06:57:21 -- nvmf/common.sh@294 -- # net_devs=() 00:17:00.203 06:57:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:00.203 06:57:21 -- nvmf/common.sh@295 -- # e810=() 00:17:00.203 06:57:21 -- nvmf/common.sh@295 -- # local -ga e810 00:17:00.203 06:57:21 -- nvmf/common.sh@296 -- # x722=() 00:17:00.203 06:57:21 -- nvmf/common.sh@296 -- # local -ga x722 00:17:00.203 06:57:21 -- nvmf/common.sh@297 -- # mlx=() 00:17:00.203 06:57:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:00.203 06:57:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.203 06:57:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:00.203 06:57:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:00.203 06:57:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:00.203 06:57:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
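The device classification being traced here, stripped to its essence, is a sysfs scan for supported vendor/device IDs. The sketch below is an illustrative reconstruction, not the actual nvmf/common.sh code; 0x15b3 (Mellanox) and 0x1015 (ConnectX-4 Lx) are the IDs that show up in the "Found 0000:d9:00.x" lines further on.

# Sketch: enumerate Mellanox (mlx5) NICs the way the trace above does, using only sysfs.
for dev in /sys/bus/pci/devices/*; do
  vendor=$(cat "$dev/vendor")     # 0x15b3 == Mellanox
  device=$(cat "$dev/device")     # 0x1015 == ConnectX-4 Lx, per this log
  if [ "$vendor" = "0x15b3" ]; then
    echo "Found ${dev##*/} ($vendor - $device)"
    ls "$dev/net" 2>/dev/null     # backing netdevs, e.g. mlx_0_0 / mlx_0_1
  fi
done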
00:17:00.203 06:57:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:00.203 06:57:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:00.203 06:57:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:00.203 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:00.203 06:57:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:00.203 06:57:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:00.203 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:00.203 06:57:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:00.203 06:57:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:00.203 06:57:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.203 06:57:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:00.203 06:57:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.203 06:57:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:00.203 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:00.203 06:57:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.203 06:57:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.203 06:57:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:00.203 06:57:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.203 06:57:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:00.203 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:00.203 06:57:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.203 06:57:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:00.203 06:57:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:00.203 06:57:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:00.203 06:57:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:00.203 06:57:21 -- nvmf/common.sh@57 -- # uname 00:17:00.203 06:57:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:00.203 06:57:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:00.203 06:57:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:00.203 06:57:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:00.203 
06:57:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:00.203 06:57:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:00.203 06:57:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:00.203 06:57:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:00.203 06:57:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:00.203 06:57:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:00.203 06:57:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:00.203 06:57:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:00.203 06:57:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:00.203 06:57:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:00.203 06:57:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:00.203 06:57:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:00.203 06:57:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:00.203 06:57:21 -- nvmf/common.sh@104 -- # continue 2 00:17:00.203 06:57:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.203 06:57:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:00.203 06:57:21 -- nvmf/common.sh@104 -- # continue 2 00:17:00.203 06:57:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:00.203 06:57:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:00.203 06:57:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:00.203 06:57:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:00.203 06:57:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.203 06:57:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.203 06:57:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:00.203 06:57:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:00.203 06:57:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:00.203 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:00.204 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:00.204 altname enp217s0f0np0 00:17:00.204 altname ens818f0np0 00:17:00.204 inet 192.168.100.8/24 scope global mlx_0_0 00:17:00.204 valid_lft forever preferred_lft forever 00:17:00.204 06:57:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:00.204 06:57:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:00.204 06:57:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.204 06:57:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:00.204 06:57:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:00.204 06:57:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:00.204 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:00.204 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:00.204 altname enp217s0f1np1 
00:17:00.204 altname ens818f1np1 00:17:00.204 inet 192.168.100.9/24 scope global mlx_0_1 00:17:00.204 valid_lft forever preferred_lft forever 00:17:00.204 06:57:21 -- nvmf/common.sh@410 -- # return 0 00:17:00.204 06:57:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:00.204 06:57:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:00.204 06:57:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:00.204 06:57:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:00.204 06:57:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:00.204 06:57:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:00.204 06:57:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:00.204 06:57:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:00.204 06:57:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:00.204 06:57:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:00.204 06:57:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.204 06:57:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.204 06:57:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:00.204 06:57:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:00.204 06:57:21 -- nvmf/common.sh@104 -- # continue 2 00:17:00.204 06:57:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:00.204 06:57:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.204 06:57:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:00.204 06:57:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:00.204 06:57:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:00.204 06:57:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:00.204 06:57:21 -- nvmf/common.sh@104 -- # continue 2 00:17:00.204 06:57:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:00.204 06:57:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:00.204 06:57:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.204 06:57:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:00.204 06:57:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:00.204 06:57:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:00.204 06:57:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:00.204 06:57:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:00.204 192.168.100.9' 00:17:00.204 06:57:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:00.204 192.168.100.9' 00:17:00.204 06:57:21 -- nvmf/common.sh@445 -- # head -n 1 00:17:00.204 06:57:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:00.204 06:57:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:00.204 192.168.100.9' 00:17:00.204 06:57:21 -- nvmf/common.sh@446 -- # tail -n +2 00:17:00.204 06:57:21 -- nvmf/common.sh@446 -- # head -n 1 00:17:00.204 06:57:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:00.204 06:57:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:00.204 06:57:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:17:00.204 06:57:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:00.204 06:57:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:00.204 06:57:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:00.204 06:57:21 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:00.204 06:57:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:00.204 06:57:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.204 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:17:00.204 06:57:21 -- nvmf/common.sh@469 -- # nvmfpid=1330038 00:17:00.204 06:57:21 -- nvmf/common.sh@470 -- # waitforlisten 1330038 00:17:00.204 06:57:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:00.204 06:57:21 -- common/autotest_common.sh@829 -- # '[' -z 1330038 ']' 00:17:00.204 06:57:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.204 06:57:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.204 06:57:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.204 06:57:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.204 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:17:00.204 [2024-12-15 06:57:21.749213] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:00.204 [2024-12-15 06:57:21.749266] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.204 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.204 [2024-12-15 06:57:21.820854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:00.463 [2024-12-15 06:57:21.858183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:00.463 [2024-12-15 06:57:21.858288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.463 [2024-12-15 06:57:21.858298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.463 [2024-12-15 06:57:21.858309] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
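The nvmfappstart/waitforlisten step traced around here amounts to launching nvmf_tgt and polling its RPC socket until it answers. A minimal sketch, using the binary path and flags visible in this log; the polling loop is a simplified stand-in for the real waitforlisten helper:

# Sketch: start the NVMe-oF target and wait for its RPC socket to come up.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &   # cores 0-1, all tracepoint groups
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2   # keep polling until the app is up and listening on the RPC socket
done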
00:17:00.463 [2024-12-15 06:57:21.858350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.463 [2024-12-15 06:57:21.858352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.033 06:57:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.033 06:57:22 -- common/autotest_common.sh@862 -- # return 0 00:17:01.033 06:57:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:01.033 06:57:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.033 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.033 06:57:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.033 06:57:22 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:01.033 06:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.033 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.033 [2024-12-15 06:57:22.634736] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17c3b50/0x17c8000) succeed. 00:17:01.033 [2024-12-15 06:57:22.643581] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17c5000/0x18096a0) succeed. 00:17:01.292 06:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.292 06:57:22 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:01.292 06:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.292 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.292 06:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.292 06:57:22 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:01.292 06:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.292 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.292 [2024-12-15 06:57:22.725208] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.292 06:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.292 06:57:22 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:01.293 06:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 NULL1 00:17:01.293 06:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 06:57:22 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:01.293 06:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 Delay0 00:17:01.293 06:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 06:57:22 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:01.293 06:57:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 06:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 06:57:22 -- target/delete_subsystem.sh@28 -- # perf_pid=1330112 00:17:01.293 06:57:22 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:01.293 06:57:22 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma 
adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:01.293 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.293 [2024-12-15 06:57:22.827925] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:03.199 06:57:24 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.199 06:57:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.199 06:57:24 -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 NVMe io qpair process completion error 00:17:04.577 NVMe io qpair process completion error 00:17:04.577 NVMe io qpair process completion error 00:17:04.577 NVMe io qpair process completion error 00:17:04.577 NVMe io qpair process completion error 00:17:04.577 NVMe io qpair process completion error 00:17:04.577 06:57:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.577 06:57:25 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:04.577 06:57:25 -- target/delete_subsystem.sh@35 -- # kill -0 1330112 00:17:04.577 06:57:25 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:04.836 06:57:26 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:04.836 06:57:26 -- target/delete_subsystem.sh@35 -- # kill -0 1330112 00:17:04.836 06:57:26 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:17:05.405 Read completed with error (sct=0, sc=8) 00:17:05.405 starting I/O failed: -6 00:17:05.405 Write completed with error (sct=0, sc=8) 00:17:05.405 starting I/O failed: -6 [... several hundred repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completion lines omitted ...]
00:17:05.406 [2024-12-15 06:57:26.910657] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:17:05.406 06:57:26 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:05.406 06:57:26 -- target/delete_subsystem.sh@35 -- # kill -0 1330112 00:17:05.406 06:57:26 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:05.406 [2024-12-15 06:57:26.924347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 [2024-12-15 06:57:26.924365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:05.406 Initializing NVMe Controllers 00:17:05.406 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:05.406 Controller IO queue size 128, less than required. 00:17:05.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:05.406 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:05.407 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:05.407 Initialization complete. Launching workers.
00:17:05.407 ======================================================== 00:17:05.407 Latency(us) 00:17:05.407 Device Information : IOPS MiB/s Average min max 00:17:05.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.45 0.04 1594220.25 1000267.04 2977611.01 00:17:05.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.45 0.04 1595609.77 1000799.65 2979187.18 00:17:05.407 ======================================================== 00:17:05.407 Total : 160.91 0.08 1594915.01 1000267.04 2979187.18 00:17:05.407 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@35 -- # kill -0 1330112 00:17:05.975 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1330112) - No such process 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@45 -- # NOT wait 1330112 00:17:05.975 06:57:27 -- common/autotest_common.sh@650 -- # local es=0 00:17:05.975 06:57:27 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1330112 00:17:05.975 06:57:27 -- common/autotest_common.sh@638 -- # local arg=wait 00:17:05.975 06:57:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.975 06:57:27 -- common/autotest_common.sh@642 -- # type -t wait 00:17:05.975 06:57:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.975 06:57:27 -- common/autotest_common.sh@653 -- # wait 1330112 00:17:05.975 06:57:27 -- common/autotest_common.sh@653 -- # es=1 00:17:05.975 06:57:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:05.975 06:57:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:05.975 06:57:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:05.975 06:57:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.975 06:57:27 -- common/autotest_common.sh@10 -- # set +x 00:17:05.975 06:57:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:05.975 06:57:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.975 06:57:27 -- common/autotest_common.sh@10 -- # set +x 00:17:05.975 [2024-12-15 06:57:27.444407] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:05.975 06:57:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.975 06:57:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.975 06:57:27 -- common/autotest_common.sh@10 -- # set +x 00:17:05.975 06:57:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@54 -- # perf_pid=1330927 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:05.975 06:57:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:05.975 
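The (( delay++ > 20 )) / kill -0 / sleep 0.5 lines that repeat below are the test's bounded wait for the perf process: kill -0 sends no signal and only probes whether the PID still exists. A self-contained sketch of the idiom, assuming a $perf_pid variable (the bound of 20 is taken from the traced checks):

    # Poll until spdk_nvme_perf exits, giving up after ~10s (20 * 0.5s).
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        if (( delay++ > 20 )); then
            echo "perf process $perf_pid did not exit in time" >&2
            break
        fi
        sleep 0.5
    done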
EAL: No free 2048 kB hugepages reported on node 1 00:17:05.975 [2024-12-15 06:57:27.533104] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:06.543 06:57:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:06.543 06:57:27 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:06.543 06:57:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:07.111 06:57:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:07.111 06:57:28 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:07.111 06:57:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:07.370 06:57:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:07.370 06:57:28 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:07.370 06:57:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:07.938 06:57:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:07.938 06:57:29 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:07.938 06:57:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:08.507 06:57:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:08.507 06:57:29 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:08.507 06:57:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:09.075 06:57:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:09.075 06:57:30 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:09.075 06:57:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:09.642 06:57:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:09.642 06:57:30 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:09.642 06:57:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:09.901 06:57:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:09.901 06:57:31 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:09.901 06:57:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:10.469 06:57:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:10.469 06:57:32 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:10.469 06:57:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:11.037 06:57:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:11.037 06:57:32 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:11.037 06:57:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:11.604 06:57:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:11.604 06:57:33 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:11.604 06:57:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:12.170 06:57:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:12.170 06:57:33 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:12.170 06:57:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:12.429 06:57:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:12.429 06:57:34 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:12.429 06:57:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:12.996 06:57:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:12.996 06:57:34 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:12.996 06:57:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:13.256 Initializing NVMe 
Controllers 00:17:13.256 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:13.256 Controller IO queue size 128, less than required. 00:17:13.256 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:13.256 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:13.256 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:13.256 Initialization complete. Launching workers. 00:17:13.256 ======================================================== 00:17:13.256 Latency(us) 00:17:13.256 Device Information : IOPS MiB/s Average min max 00:17:13.256 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001548.14 1000054.10 1004025.36 00:17:13.256 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002817.26 1000189.14 1006463.64 00:17:13.256 ======================================================== 00:17:13.256 Total : 256.00 0.12 1002182.70 1000054.10 1006463.64 00:17:13.256 00:17:13.515 06:57:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:13.515 06:57:35 -- target/delete_subsystem.sh@57 -- # kill -0 1330927 00:17:13.515 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1330927) - No such process 00:17:13.515 06:57:35 -- target/delete_subsystem.sh@67 -- # wait 1330927 00:17:13.515 06:57:35 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:13.515 06:57:35 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:13.515 06:57:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:13.515 06:57:35 -- nvmf/common.sh@116 -- # sync 00:17:13.515 06:57:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:13.515 06:57:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:13.515 06:57:35 -- nvmf/common.sh@119 -- # set +e 00:17:13.515 06:57:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:13.515 06:57:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:13.515 rmmod nvme_rdma 00:17:13.515 rmmod nvme_fabrics 00:17:13.515 06:57:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:13.515 06:57:35 -- nvmf/common.sh@123 -- # set -e 00:17:13.515 06:57:35 -- nvmf/common.sh@124 -- # return 0 00:17:13.515 06:57:35 -- nvmf/common.sh@477 -- # '[' -n 1330038 ']' 00:17:13.515 06:57:35 -- nvmf/common.sh@478 -- # killprocess 1330038 00:17:13.515 06:57:35 -- common/autotest_common.sh@936 -- # '[' -z 1330038 ']' 00:17:13.515 06:57:35 -- common/autotest_common.sh@940 -- # kill -0 1330038 00:17:13.515 06:57:35 -- common/autotest_common.sh@941 -- # uname 00:17:13.515 06:57:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.515 06:57:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1330038 00:17:13.515 06:57:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:13.515 06:57:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:13.515 06:57:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1330038' 00:17:13.515 killing process with pid 1330038 00:17:13.515 06:57:35 -- common/autotest_common.sh@955 -- # kill 1330038 00:17:13.515 06:57:35 -- common/autotest_common.sh@960 -- # wait 1330038 00:17:13.775 06:57:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:13.775 06:57:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:13.775 00:17:13.775 real 0m20.487s 
00:17:13.775 user 0m50.139s 00:17:13.775 sys 0m6.288s 00:17:13.775 06:57:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:13.775 06:57:35 -- common/autotest_common.sh@10 -- # set +x 00:17:13.775 ************************************ 00:17:13.775 END TEST nvmf_delete_subsystem 00:17:13.775 ************************************ 00:17:13.775 06:57:35 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:13.775 06:57:35 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:13.775 06:57:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.775 06:57:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.775 06:57:35 -- common/autotest_common.sh@10 -- # set +x 00:17:13.775 ************************************ 00:17:13.775 START TEST nvmf_nvme_cli 00:17:13.775 ************************************ 00:17:13.775 06:57:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:14.035 * Looking for test storage... 00:17:14.035 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:14.035 06:57:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:14.035 06:57:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:14.035 06:57:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:14.035 06:57:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:14.035 06:57:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:14.035 06:57:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:14.035 06:57:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:14.035 06:57:35 -- scripts/common.sh@335 -- # IFS=.-: 00:17:14.035 06:57:35 -- scripts/common.sh@335 -- # read -ra ver1 00:17:14.035 06:57:35 -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.035 06:57:35 -- scripts/common.sh@336 -- # read -ra ver2 00:17:14.035 06:57:35 -- scripts/common.sh@337 -- # local 'op=<' 00:17:14.035 06:57:35 -- scripts/common.sh@339 -- # ver1_l=2 00:17:14.035 06:57:35 -- scripts/common.sh@340 -- # ver2_l=1 00:17:14.035 06:57:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:14.035 06:57:35 -- scripts/common.sh@343 -- # case "$op" in 00:17:14.035 06:57:35 -- scripts/common.sh@344 -- # : 1 00:17:14.035 06:57:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:14.035 06:57:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.035 06:57:35 -- scripts/common.sh@364 -- # decimal 1 00:17:14.035 06:57:35 -- scripts/common.sh@352 -- # local d=1 00:17:14.035 06:57:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.035 06:57:35 -- scripts/common.sh@354 -- # echo 1 00:17:14.035 06:57:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:14.035 06:57:35 -- scripts/common.sh@365 -- # decimal 2 00:17:14.035 06:57:35 -- scripts/common.sh@352 -- # local d=2 00:17:14.035 06:57:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.035 06:57:35 -- scripts/common.sh@354 -- # echo 2 00:17:14.035 06:57:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:14.035 06:57:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:14.035 06:57:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:14.035 06:57:35 -- scripts/common.sh@367 -- # return 0 00:17:14.035 06:57:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.035 06:57:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:14.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.035 --rc genhtml_branch_coverage=1 00:17:14.035 --rc genhtml_function_coverage=1 00:17:14.035 --rc genhtml_legend=1 00:17:14.035 --rc geninfo_all_blocks=1 00:17:14.035 --rc geninfo_unexecuted_blocks=1 00:17:14.035 00:17:14.035 ' 00:17:14.035 06:57:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:14.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.035 --rc genhtml_branch_coverage=1 00:17:14.035 --rc genhtml_function_coverage=1 00:17:14.035 --rc genhtml_legend=1 00:17:14.035 --rc geninfo_all_blocks=1 00:17:14.035 --rc geninfo_unexecuted_blocks=1 00:17:14.035 00:17:14.035 ' 00:17:14.035 06:57:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:14.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.035 --rc genhtml_branch_coverage=1 00:17:14.035 --rc genhtml_function_coverage=1 00:17:14.035 --rc genhtml_legend=1 00:17:14.035 --rc geninfo_all_blocks=1 00:17:14.035 --rc geninfo_unexecuted_blocks=1 00:17:14.035 00:17:14.035 ' 00:17:14.035 06:57:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:14.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.035 --rc genhtml_branch_coverage=1 00:17:14.035 --rc genhtml_function_coverage=1 00:17:14.035 --rc genhtml_legend=1 00:17:14.035 --rc geninfo_all_blocks=1 00:17:14.035 --rc geninfo_unexecuted_blocks=1 00:17:14.035 00:17:14.035 ' 00:17:14.035 06:57:35 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.035 06:57:35 -- nvmf/common.sh@7 -- # uname -s 00:17:14.035 06:57:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.035 06:57:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.035 06:57:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.035 06:57:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.035 06:57:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.035 06:57:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.035 06:57:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.035 06:57:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.035 06:57:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.035 06:57:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.035 06:57:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:14.035 06:57:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:14.035 06:57:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.035 06:57:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.035 06:57:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.035 06:57:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:14.035 06:57:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.035 06:57:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.035 06:57:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.035 06:57:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.035 06:57:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.035 06:57:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.035 06:57:35 -- paths/export.sh@5 -- # export PATH 00:17:14.035 06:57:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.035 06:57:35 -- nvmf/common.sh@46 -- # : 0 00:17:14.035 06:57:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:14.035 06:57:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:14.035 06:57:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:14.035 06:57:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.035 06:57:35 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.035 06:57:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:14.035 06:57:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:14.035 06:57:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:14.035 06:57:35 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.035 06:57:35 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.035 06:57:35 -- target/nvme_cli.sh@14 -- # devs=() 00:17:14.035 06:57:35 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:14.035 06:57:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:14.035 06:57:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.035 06:57:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:14.035 06:57:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:14.035 06:57:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:14.035 06:57:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.035 06:57:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.035 06:57:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.035 06:57:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:14.035 06:57:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:14.035 06:57:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:14.035 06:57:35 -- common/autotest_common.sh@10 -- # set +x 00:17:22.161 06:57:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:22.161 06:57:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:22.161 06:57:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:22.161 06:57:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:22.161 06:57:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:22.161 06:57:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:22.161 06:57:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:22.161 06:57:42 -- nvmf/common.sh@294 -- # net_devs=() 00:17:22.161 06:57:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:22.161 06:57:42 -- nvmf/common.sh@295 -- # e810=() 00:17:22.161 06:57:42 -- nvmf/common.sh@295 -- # local -ga e810 00:17:22.161 06:57:42 -- nvmf/common.sh@296 -- # x722=() 00:17:22.161 06:57:42 -- nvmf/common.sh@296 -- # local -ga x722 00:17:22.161 06:57:42 -- nvmf/common.sh@297 -- # mlx=() 00:17:22.161 06:57:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:22.161 06:57:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.161 06:57:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:22.161 06:57:42 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:22.161 06:57:42 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:22.161 06:57:42 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:22.161 06:57:42 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:22.161 06:57:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:22.161 06:57:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:22.161 06:57:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:22.161 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:22.161 06:57:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.161 06:57:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:22.161 06:57:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:22.161 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:22.161 06:57:42 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.161 06:57:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:22.161 06:57:42 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:22.161 06:57:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.161 06:57:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:22.161 06:57:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.161 06:57:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:22.161 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:22.161 06:57:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.161 06:57:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:22.161 06:57:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.161 06:57:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:22.161 06:57:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.161 06:57:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:22.161 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:22.161 06:57:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.161 06:57:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:22.161 06:57:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:22.161 06:57:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:22.161 06:57:42 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:22.161 06:57:42 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:22.162 06:57:42 -- nvmf/common.sh@57 -- # uname 00:17:22.162 06:57:42 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:22.162 
06:57:42 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:22.162 06:57:42 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:22.162 06:57:42 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:22.162 06:57:42 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:22.162 06:57:42 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:22.162 06:57:42 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:22.162 06:57:42 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:22.162 06:57:42 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:22.162 06:57:42 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:22.162 06:57:42 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:22.162 06:57:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.162 06:57:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:22.162 06:57:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:22.162 06:57:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.162 06:57:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:22.162 06:57:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.162 06:57:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.162 06:57:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:22.162 06:57:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.162 06:57:42 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:22.162 06:57:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:22.162 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.162 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:22.162 altname enp217s0f0np0 00:17:22.162 altname ens818f0np0 00:17:22.162 inet 192.168.100.8/24 scope global mlx_0_0 00:17:22.162 valid_lft forever preferred_lft forever 00:17:22.162 06:57:42 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:22.162 06:57:42 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.162 06:57:42 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:22.162 06:57:42 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:17:22.162 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.162 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:22.162 altname enp217s0f1np1 00:17:22.162 altname ens818f1np1 00:17:22.162 inet 192.168.100.9/24 scope global mlx_0_1 00:17:22.162 valid_lft forever preferred_lft forever 00:17:22.162 06:57:42 -- nvmf/common.sh@410 -- # return 0 00:17:22.162 06:57:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:22.162 06:57:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:22.162 06:57:42 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:22.162 06:57:42 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:22.162 06:57:42 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.162 06:57:42 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:22.162 06:57:42 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:22.162 06:57:42 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.162 06:57:42 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:22.162 06:57:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.162 06:57:42 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.162 06:57:42 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.162 06:57:42 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@104 -- # continue 2 00:17:22.162 06:57:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:22.162 06:57:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.162 06:57:42 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:22.162 06:57:42 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:22.162 06:57:42 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:22.162 06:57:42 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:22.162 192.168.100.9' 00:17:22.162 06:57:42 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:22.162 192.168.100.9' 00:17:22.162 06:57:42 -- nvmf/common.sh@445 -- # head -n 1 00:17:22.162 06:57:42 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:22.162 06:57:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:22.162 192.168.100.9' 00:17:22.162 06:57:42 -- nvmf/common.sh@446 -- # tail -n +2 00:17:22.162 06:57:42 -- nvmf/common.sh@446 -- # head -n 1 00:17:22.162 06:57:42 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:22.162 06:57:42 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:22.162 06:57:42 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:22.162 06:57:42 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:22.162 06:57:42 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:22.162 06:57:42 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:22.162 06:57:42 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:22.162 06:57:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.162 06:57:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.162 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:17:22.162 06:57:42 -- nvmf/common.sh@469 -- # nvmfpid=1335722 00:17:22.162 06:57:42 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:22.162 06:57:42 -- nvmf/common.sh@470 -- # waitforlisten 1335722 00:17:22.162 06:57:42 -- common/autotest_common.sh@829 -- # '[' -z 1335722 ']' 00:17:22.162 06:57:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.162 06:57:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.162 06:57:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.162 06:57:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.162 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:17:22.162 [2024-12-15 06:57:42.575088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:22.162 [2024-12-15 06:57:42.575140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.162 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.162 [2024-12-15 06:57:42.645654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.162 [2024-12-15 06:57:42.684536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.162 [2024-12-15 06:57:42.684646] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.162 [2024-12-15 06:57:42.684656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.162 [2024-12-15 06:57:42.684665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
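The allocate_nic_ips / get_ip_address trace earlier in this test's setup shows how the target addresses are derived: list the IPv4 address of each RDMA netdev and strip the prefix length. As a standalone sketch, using the interface name and pipeline from the log:

    # First IPv4 address of mlx_0_0, without the /24 suffix.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # prints: 192.168.100.8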
00:17:22.162 [2024-12-15 06:57:42.684710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.162 [2024-12-15 06:57:42.684822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.162 [2024-12-15 06:57:42.684905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.162 [2024-12-15 06:57:42.684907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.162 06:57:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.162 06:57:43 -- common/autotest_common.sh@862 -- # return 0 00:17:22.162 06:57:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.162 06:57:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.162 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.162 06:57:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.162 06:57:43 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:22.162 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.162 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.162 [2024-12-15 06:57:43.479278] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bab0d0/0x1baf5a0) succeed. 00:17:22.162 [2024-12-15 06:57:43.488447] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bac670/0x1bf0c40) succeed. 00:17:22.162 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.162 06:57:43 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:22.162 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.162 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.162 Malloc0 00:17:22.162 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.162 06:57:43 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:22.162 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.162 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.163 Malloc1 00:17:22.163 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.163 06:57:43 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:22.163 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.163 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.163 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.163 06:57:43 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.163 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.163 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.163 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.163 06:57:43 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:22.163 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.163 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.163 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.163 06:57:43 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:22.163 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.163 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.163 [2024-12-15 
06:57:43.685849] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:22.163 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.163 06:57:43 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:22.163 06:57:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.163 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:17:22.163 06:57:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.163 06:57:43 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:22.163 00:17:22.163 Discovery Log Number of Records 2, Generation counter 2 00:17:22.163 =====Discovery Log Entry 0====== 00:17:22.163 trtype: rdma 00:17:22.163 adrfam: ipv4 00:17:22.163 subtype: current discovery subsystem 00:17:22.163 treq: not required 00:17:22.163 portid: 0 00:17:22.163 trsvcid: 4420 00:17:22.163 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:22.163 traddr: 192.168.100.8 00:17:22.163 eflags: explicit discovery connections, duplicate discovery information 00:17:22.163 rdma_prtype: not specified 00:17:22.163 rdma_qptype: connected 00:17:22.163 rdma_cms: rdma-cm 00:17:22.163 rdma_pkey: 0x0000 00:17:22.163 =====Discovery Log Entry 1====== 00:17:22.163 trtype: rdma 00:17:22.163 adrfam: ipv4 00:17:22.163 subtype: nvme subsystem 00:17:22.163 treq: not required 00:17:22.163 portid: 0 00:17:22.163 trsvcid: 4420 00:17:22.163 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:22.163 traddr: 192.168.100.8 00:17:22.163 eflags: none 00:17:22.163 rdma_prtype: not specified 00:17:22.163 rdma_qptype: connected 00:17:22.163 rdma_cms: rdma-cm 00:17:22.163 rdma_pkey: 0x0000 00:17:22.163 06:57:43 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:22.421 06:57:43 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:22.421 06:57:43 -- nvmf/common.sh@510 -- # local dev _ 00:17:22.421 06:57:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:22.421 06:57:43 -- nvmf/common.sh@509 -- # nvme list 00:17:22.421 06:57:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:22.421 06:57:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:22.421 06:57:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:22.421 06:57:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:22.421 06:57:43 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:22.421 06:57:43 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:23.359 06:57:44 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:23.359 06:57:44 -- common/autotest_common.sh@1187 -- # local i=0 00:17:23.359 06:57:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.359 06:57:44 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:17:23.359 06:57:44 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:17:23.359 06:57:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:25.266 06:57:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:25.266 06:57:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:25.266 06:57:46 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 
00:17:25.266 06:57:46 -- common/autotest_common.sh@1196 -- # nvme_devices=2 00:17:25.266 06:57:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.266 06:57:46 -- common/autotest_common.sh@1197 -- # return 0 00:17:25.266 06:57:46 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:25.266 06:57:46 -- nvmf/common.sh@510 -- # local dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@509 -- # nvme list 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:25.266 /dev/nvme0n2 ]] 00:17:25.266 06:57:46 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:25.266 06:57:46 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:25.266 06:57:46 -- nvmf/common.sh@510 -- # local dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@509 -- # nvme list 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:25.266 06:57:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:25.266 06:57:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.266 06:57:46 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:25.266 06:57:46 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.645 06:57:47 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.645 06:57:47 -- common/autotest_common.sh@1208 -- # local i=0 00:17:26.645 06:57:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:26.645 06:57:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.645 06:57:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:26.645 06:57:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.645 06:57:47 -- common/autotest_common.sh@1220 -- # return 0 00:17:26.645 06:57:47 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:26.645 06:57:47 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.645 06:57:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.645 06:57:47 -- common/autotest_common.sh@10 -- # set +x 00:17:26.645 06:57:47 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.645 06:57:47 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:26.645 06:57:47 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:26.645 06:57:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:26.645 06:57:47 -- nvmf/common.sh@116 -- # sync 00:17:26.645 06:57:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:26.645 06:57:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:26.645 06:57:47 -- nvmf/common.sh@119 -- # set +e 00:17:26.645 06:57:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:26.645 06:57:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:26.645 rmmod nvme_rdma 00:17:26.645 rmmod nvme_fabrics 00:17:26.645 06:57:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:26.646 06:57:47 -- nvmf/common.sh@123 -- # set -e 00:17:26.646 06:57:47 -- nvmf/common.sh@124 -- # return 0 00:17:26.646 06:57:47 -- nvmf/common.sh@477 -- # '[' -n 1335722 ']' 00:17:26.646 06:57:47 -- nvmf/common.sh@478 -- # killprocess 1335722 00:17:26.646 06:57:47 -- common/autotest_common.sh@936 -- # '[' -z 1335722 ']' 00:17:26.646 06:57:47 -- common/autotest_common.sh@940 -- # kill -0 1335722 00:17:26.646 06:57:47 -- common/autotest_common.sh@941 -- # uname 00:17:26.646 06:57:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.646 06:57:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1335722 00:17:26.646 06:57:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:26.646 06:57:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:26.646 06:57:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1335722' 00:17:26.646 killing process with pid 1335722 00:17:26.646 06:57:48 -- common/autotest_common.sh@955 -- # kill 1335722 00:17:26.646 06:57:48 -- common/autotest_common.sh@960 -- # wait 1335722 00:17:26.905 06:57:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:26.905 06:57:48 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:26.905 00:17:26.905 real 0m12.900s 00:17:26.905 user 0m24.387s 00:17:26.905 sys 0m5.901s 00:17:26.905 06:57:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:26.905 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:17:26.905 ************************************ 00:17:26.905 END TEST nvmf_nvme_cli 00:17:26.905 ************************************ 00:17:26.905 06:57:48 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:26.905 06:57:48 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:26.905 06:57:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:26.905 06:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.905 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:17:26.905 ************************************ 00:17:26.905 START TEST nvmf_host_management 00:17:26.905 ************************************ 00:17:26.905 06:57:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:26.905 * Looking for test storage... 
00:17:26.905 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:26.905 06:57:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:26.905 06:57:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:26.905 06:57:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:26.905 06:57:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:26.905 06:57:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:26.905 06:57:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:26.905 06:57:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:26.905 06:57:48 -- scripts/common.sh@335 -- # IFS=.-: 00:17:26.905 06:57:48 -- scripts/common.sh@335 -- # read -ra ver1 00:17:26.905 06:57:48 -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.905 06:57:48 -- scripts/common.sh@336 -- # read -ra ver2 00:17:26.905 06:57:48 -- scripts/common.sh@337 -- # local 'op=<' 00:17:26.905 06:57:48 -- scripts/common.sh@339 -- # ver1_l=2 00:17:26.905 06:57:48 -- scripts/common.sh@340 -- # ver2_l=1 00:17:26.905 06:57:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:26.905 06:57:48 -- scripts/common.sh@343 -- # case "$op" in 00:17:26.905 06:57:48 -- scripts/common.sh@344 -- # : 1 00:17:26.905 06:57:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:26.905 06:57:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.905 06:57:48 -- scripts/common.sh@364 -- # decimal 1 00:17:26.905 06:57:48 -- scripts/common.sh@352 -- # local d=1 00:17:26.905 06:57:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.905 06:57:48 -- scripts/common.sh@354 -- # echo 1 00:17:26.905 06:57:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:26.905 06:57:48 -- scripts/common.sh@365 -- # decimal 2 00:17:26.905 06:57:48 -- scripts/common.sh@352 -- # local d=2 00:17:26.905 06:57:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.905 06:57:48 -- scripts/common.sh@354 -- # echo 2 00:17:26.905 06:57:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:26.905 06:57:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.905 06:57:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:26.905 06:57:48 -- scripts/common.sh@367 -- # return 0 00:17:26.905 06:57:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.905 06:57:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:26.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.905 --rc genhtml_branch_coverage=1 00:17:26.905 --rc genhtml_function_coverage=1 00:17:26.905 --rc genhtml_legend=1 00:17:26.905 --rc geninfo_all_blocks=1 00:17:26.905 --rc geninfo_unexecuted_blocks=1 00:17:26.905 00:17:26.905 ' 00:17:26.905 06:57:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:26.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.905 --rc genhtml_branch_coverage=1 00:17:26.905 --rc genhtml_function_coverage=1 00:17:26.905 --rc genhtml_legend=1 00:17:26.905 --rc geninfo_all_blocks=1 00:17:26.905 --rc geninfo_unexecuted_blocks=1 00:17:26.905 00:17:26.905 ' 00:17:26.905 06:57:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:26.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.905 --rc genhtml_branch_coverage=1 00:17:26.905 --rc genhtml_function_coverage=1 00:17:26.905 --rc genhtml_legend=1 00:17:26.905 --rc geninfo_all_blocks=1 00:17:26.905 --rc geninfo_unexecuted_blocks=1 00:17:26.905 00:17:26.905 ' 
00:17:26.905 06:57:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:26.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.905 --rc genhtml_branch_coverage=1 00:17:26.905 --rc genhtml_function_coverage=1 00:17:26.905 --rc genhtml_legend=1 00:17:26.905 --rc geninfo_all_blocks=1 00:17:26.905 --rc geninfo_unexecuted_blocks=1 00:17:26.905 00:17:26.905 ' 00:17:26.905 06:57:48 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.905 06:57:48 -- nvmf/common.sh@7 -- # uname -s 00:17:26.905 06:57:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.905 06:57:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.905 06:57:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.905 06:57:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.905 06:57:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.905 06:57:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.905 06:57:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.905 06:57:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.905 06:57:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.905 06:57:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.165 06:57:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:27.165 06:57:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:27.165 06:57:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.165 06:57:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.165 06:57:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.165 06:57:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:27.165 06:57:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.165 06:57:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.165 06:57:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.165 06:57:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.165 06:57:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.165 06:57:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.165 06:57:48 -- paths/export.sh@5 -- # export PATH 00:17:27.165 06:57:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.165 06:57:48 -- nvmf/common.sh@46 -- # : 0 00:17:27.165 06:57:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:27.166 06:57:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:27.166 06:57:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:27.166 06:57:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.166 06:57:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.166 06:57:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:27.166 06:57:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:27.166 06:57:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:27.166 06:57:48 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.166 06:57:48 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.166 06:57:48 -- target/host_management.sh@104 -- # nvmftestinit 00:17:27.166 06:57:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:27.166 06:57:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.166 06:57:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:27.166 06:57:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:27.166 06:57:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:27.166 06:57:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.166 06:57:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.166 06:57:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.166 06:57:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:27.166 06:57:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:27.166 06:57:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:27.166 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:17:33.803 06:57:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:33.803 06:57:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:33.803 06:57:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:33.803 06:57:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:33.803 06:57:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:33.803 06:57:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:33.803 06:57:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:33.803 06:57:54 -- nvmf/common.sh@294 -- # net_devs=() 00:17:33.803 06:57:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:33.803 
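The enumeration that follows buckets NICs by PCI vendor:device ID (0x8086 for Intel, 0x15b3 for Mellanox). A standalone sketch of the same idea, reading sysfs directly instead of the suite's pci_bus_cache helper:

  # Illustrative only: walk every PCI function and report Mellanox parts.
  # 0x15b3:0x1015 is the ConnectX-4 Lx pair found on this rig.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == 0x15b3 ]] && echo "Mellanox NIC at ${pci##*/} (device $device)"
  done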
06:57:54 -- nvmf/common.sh@295 -- # e810=() 00:17:33.803 06:57:54 -- nvmf/common.sh@295 -- # local -ga e810 00:17:33.803 06:57:54 -- nvmf/common.sh@296 -- # x722=() 00:17:33.803 06:57:54 -- nvmf/common.sh@296 -- # local -ga x722 00:17:33.803 06:57:54 -- nvmf/common.sh@297 -- # mlx=() 00:17:33.803 06:57:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:33.803 06:57:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.803 06:57:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:33.803 06:57:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:33.803 06:57:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:33.803 06:57:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:33.803 06:57:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:33.803 06:57:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.803 06:57:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:33.803 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:33.803 06:57:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.803 06:57:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:33.803 06:57:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:33.803 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:33.803 06:57:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.803 06:57:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:33.803 06:57:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.803 06:57:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.803 06:57:54 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.803 06:57:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.803 06:57:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:33.803 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:33.803 06:57:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.803 06:57:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:33.803 06:57:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.803 06:57:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:33.803 06:57:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.803 06:57:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:33.803 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:33.803 06:57:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.803 06:57:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:33.803 06:57:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:33.803 06:57:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:33.803 06:57:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:33.803 06:57:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:33.803 06:57:54 -- nvmf/common.sh@57 -- # uname 00:17:33.803 06:57:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:33.803 06:57:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:33.803 06:57:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:33.804 06:57:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:33.804 06:57:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:33.804 06:57:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:33.804 06:57:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:33.804 06:57:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:33.804 06:57:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:33.804 06:57:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.804 06:57:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:33.804 06:57:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.804 06:57:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:33.804 06:57:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:33.804 06:57:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.804 06:57:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:33.804 06:57:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@104 -- # continue 2 00:17:33.804 06:57:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:33.804 06:57:55 -- nvmf/common.sh@104 -- # continue 2 00:17:33.804 06:57:55 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:33.804 06:57:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.804 06:57:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:33.804 06:57:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:33.804 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.804 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:33.804 altname enp217s0f0np0 00:17:33.804 altname ens818f0np0 00:17:33.804 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.804 valid_lft forever preferred_lft forever 00:17:33.804 06:57:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:33.804 06:57:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:33.804 06:57:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.804 06:57:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:33.804 06:57:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:33.804 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.804 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:33.804 altname enp217s0f1np1 00:17:33.804 altname ens818f1np1 00:17:33.804 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.804 valid_lft forever preferred_lft forever 00:17:33.804 06:57:55 -- nvmf/common.sh@410 -- # return 0 00:17:33.804 06:57:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:33.804 06:57:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.804 06:57:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:33.804 06:57:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:33.804 06:57:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.804 06:57:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:33.804 06:57:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:33.804 06:57:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.804 06:57:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:33.804 06:57:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@104 -- # continue 2 00:17:33.804 06:57:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.804 06:57:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.804 06:57:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 
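The address extraction traced here is a fixed three-stage pipeline over ip(8)'s one-record-per-line output. Written out as the helper it implements (commands exactly as traced):

  get_ip_address() {
      local interface=$1
      # -o prints one record per line; field 4 is "ADDR/PREFIX", cut drops the prefix
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig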
00:17:33.804 06:57:55 -- nvmf/common.sh@104 -- # continue 2 00:17:33.804 06:57:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:33.804 06:57:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.804 06:57:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:33.804 06:57:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:33.804 06:57:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:33.804 06:57:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:33.804 06:57:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.804 192.168.100.9' 00:17:33.804 06:57:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:33.804 192.168.100.9' 00:17:33.804 06:57:55 -- nvmf/common.sh@445 -- # head -n 1 00:17:33.804 06:57:55 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.804 06:57:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:33.804 192.168.100.9' 00:17:33.804 06:57:55 -- nvmf/common.sh@446 -- # tail -n +2 00:17:33.804 06:57:55 -- nvmf/common.sh@446 -- # head -n 1 00:17:33.804 06:57:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.804 06:57:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:33.804 06:57:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.804 06:57:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:33.804 06:57:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:33.804 06:57:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:33.804 06:57:55 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:33.804 06:57:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:33.804 06:57:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.804 06:57:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.804 ************************************ 00:17:33.804 START TEST nvmf_host_management 00:17:33.804 ************************************ 00:17:33.804 06:57:55 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:33.804 06:57:55 -- target/host_management.sh@69 -- # starttarget 00:17:33.804 06:57:55 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:33.804 06:57:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.804 06:57:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:33.804 06:57:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.804 06:57:55 -- nvmf/common.sh@469 -- # nvmfpid=1340035 00:17:33.804 06:57:55 -- nvmf/common.sh@470 -- # waitforlisten 1340035 00:17:33.804 06:57:55 -- common/autotest_common.sh@829 -- # '[' -z 1340035 ']' 00:17:33.804 06:57:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.804 06:57:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.804 06:57:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
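The first and second target IPs above fall out of RDMA_IP_LIST purely by line position, with head and tail splitting the newline-separated list exactly as traced:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
  NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)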
00:17:33.804 06:57:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:33.804 06:57:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.804 06:57:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.804 [2024-12-15 06:57:55.216657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:33.804 [2024-12-15 06:57:55.216704] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.804 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.804 [2024-12-15 06:57:55.286842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.804 [2024-12-15 06:57:55.324252] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.804 [2024-12-15 06:57:55.324363] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.804 [2024-12-15 06:57:55.324373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.804 [2024-12-15 06:57:55.324381] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.804 [2024-12-15 06:57:55.324522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.804 [2024-12-15 06:57:55.324608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.804 [2024-12-15 06:57:55.324716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.804 [2024-12-15 06:57:55.324718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:34.741 06:57:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.741 06:57:56 -- common/autotest_common.sh@862 -- # return 0 00:17:34.741 06:57:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.742 06:57:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.742 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.742 06:57:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.742 06:57:56 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:34.742 06:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.742 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.742 [2024-12-15 06:57:56.103959] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x179a3c0/0x179e890) succeed. 00:17:34.742 [2024-12-15 06:57:56.113120] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x179b960/0x17dff30) succeed. 
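The -m 0x1E mask passed to nvmf_tgt sets bits 1 through 4, which is why the four reactor notices above land on cores 1-4 rather than 0-3. A quick illustrative decoder for such a mask (not part of the suite):

  mask=0x1E
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor would run on core $core"
  done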
00:17:34.742 06:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.742 06:57:56 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:34.742 06:57:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.742 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.742 06:57:56 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:34.742 06:57:56 -- target/host_management.sh@23 -- # cat 00:17:34.742 06:57:56 -- target/host_management.sh@30 -- # rpc_cmd 00:17:34.742 06:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.742 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.742 Malloc0 00:17:34.742 [2024-12-15 06:57:56.291052] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:34.742 06:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.742 06:57:56 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:34.742 06:57:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.742 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.742 06:57:56 -- target/host_management.sh@73 -- # perfpid=1340305 00:17:34.742 06:57:56 -- target/host_management.sh@74 -- # waitforlisten 1340305 /var/tmp/bdevperf.sock 00:17:34.742 06:57:56 -- common/autotest_common.sh@829 -- # '[' -z 1340305 ']' 00:17:34.742 06:57:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.742 06:57:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.742 06:57:56 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:34.742 06:57:56 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:34.742 06:57:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.742 06:57:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.742 06:57:56 -- nvmf/common.sh@520 -- # config=() 00:17:34.742 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.742 06:57:56 -- nvmf/common.sh@520 -- # local subsystem config 00:17:34.742 06:57:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:34.742 06:57:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:34.742 { 00:17:34.742 "params": { 00:17:34.742 "name": "Nvme$subsystem", 00:17:34.742 "trtype": "$TEST_TRANSPORT", 00:17:34.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.742 "adrfam": "ipv4", 00:17:34.742 "trsvcid": "$NVMF_PORT", 00:17:34.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.742 "hdgst": ${hdgst:-false}, 00:17:34.742 "ddgst": ${ddgst:-false} 00:17:34.742 }, 00:17:34.742 "method": "bdev_nvme_attach_controller" 00:17:34.742 } 00:17:34.742 EOF 00:17:34.742 )") 00:17:34.742 06:57:56 -- nvmf/common.sh@542 -- # cat 00:17:34.742 06:57:56 -- nvmf/common.sh@544 -- # jq . 
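bdevperf receives the generated controller config through bash process substitution, which is where the --json /dev/fd/63 in the trace comes from. A minimal sketch of the same invocation shape (flags copied from the trace; the relative binary path is an assumption):

  # Process substitution hands bdevperf the config on an anonymous fd,
  # so nothing is written to disk. -q/-o/-w/-t match the traced flags:
  # queue depth 64, 64 KiB I/O, verify workload, 10-second run.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10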
00:17:34.742 06:57:56 -- nvmf/common.sh@545 -- # IFS=, 00:17:34.742 06:57:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:34.742 "params": { 00:17:34.742 "name": "Nvme0", 00:17:34.742 "trtype": "rdma", 00:17:34.742 "traddr": "192.168.100.8", 00:17:34.742 "adrfam": "ipv4", 00:17:34.742 "trsvcid": "4420", 00:17:34.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:34.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:34.742 "hdgst": false, 00:17:34.742 "ddgst": false 00:17:34.742 }, 00:17:34.742 "method": "bdev_nvme_attach_controller" 00:17:34.742 }' 00:17:35.001 [2024-12-15 06:57:56.389969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:35.002 [2024-12-15 06:57:56.390032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340305 ] 00:17:35.002 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.002 [2024-12-15 06:57:56.462116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.002 [2024-12-15 06:57:56.498249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.261 Running I/O for 10 seconds... 00:17:35.830 06:57:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.830 06:57:57 -- common/autotest_common.sh@862 -- # return 0 00:17:35.830 06:57:57 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:35.830 06:57:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.830 06:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:35.830 06:57:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.830 06:57:57 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:35.830 06:57:57 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:35.830 06:57:57 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:35.830 06:57:57 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:35.830 06:57:57 -- target/host_management.sh@52 -- # local ret=1 00:17:35.830 06:57:57 -- target/host_management.sh@53 -- # local i 00:17:35.830 06:57:57 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:35.830 06:57:57 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:35.830 06:57:57 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:35.830 06:57:57 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:35.830 06:57:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.830 06:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:35.830 06:57:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.830 06:57:57 -- target/host_management.sh@55 -- # read_io_count=3205 00:17:35.830 06:57:57 -- target/host_management.sh@58 -- # '[' 3205 -ge 100 ']' 00:17:35.830 06:57:57 -- target/host_management.sh@59 -- # ret=0 00:17:35.830 06:57:57 -- target/host_management.sh@60 -- # break 00:17:35.830 06:57:57 -- target/host_management.sh@64 -- # return 0 00:17:35.830 06:57:57 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:35.830 06:57:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.830 06:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:35.830 06:57:57 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.830 06:57:57 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:35.830 06:57:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.830 06:57:57 -- common/autotest_common.sh@10 -- # set +x 00:17:35.830 06:57:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.830 06:57:57 -- target/host_management.sh@87 -- # sleep 1 00:17:36.768 [2024-12-15 06:57:58.283894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:17:36.768 [2024-12-15 06:57:58.283929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0 00:17:36.768 [2024-12-15 06:57:58.283947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:17:36.768 [2024-12-15 06:57:58.283957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0 00:17:36.768 [2024-12-15 06:57:58.283969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:17:36.768 [2024-12-15 06:57:58.283982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0 00:17:36.768 [2024-12-15 06:57:58.283993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:36.768 [2024-12-15 06:57:58.284003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0 00:17:36.768 [2024-12-15 06:57:58.284014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:17:36.768 [2024-12-15 06:57:58.284023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0 00:17:36.768 [2024-12-15 06:57:58.284034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:17:36.768 [2024-12-15 06:57:58.284044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0 00:17:36.768 [2024-12-15 06:57:58.284055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:17:36.768 [2024-12-15 06:57:58.284064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0 00:17:36.768 [2024-12-15 06:57:58.284078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:17:36.768 [2024-12-15 06:57:58.284088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0
00:17:36.768 [2024-12-15 06:57:58.284098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500
00:17:36.768 [2024-12-15 06:57:58.284107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27516 cdw0:208f2000 sqhd:940a p:0 m:0 dnr:0
[... roughly sixty further nvme_io_qpair_print_command/spdk_nvme_print_completion pairs elided: every outstanding READ and WRITE on qid:1 (lba 42752-52608, len:128, SGL KEYED DATA BLOCK) completed with ABORTED - SQ DELETION (00/08) while the submission queue was torn down ...]
00:17:36.770 [2024-12-15 06:57:58.287050] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller.
00:17:36.770 [2024-12-15 06:57:58.287932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:17:36.770 task offset: 47872 on job bdev=Nvme0n1 fails
00:17:36.770
00:17:36.770 Latency(us)
00:17:36.770 Device Information                                                        : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average       min         max
00:17:36.770 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:36.770 Job: Nvme0n1 ended in about 1.62 seconds with error
00:17:36.770 Verification LBA range: start 0x0 length 0x400
00:17:36.770 Nvme0n1                                                                   :       1.62  2109.27  131.83   39.47  0.00  29583.54   3289.91  1013343.85
00:17:36.770 ===================================================================================================================
00:17:36.770 Total                                                                     :             2109.27  131.83   39.47  0.00  29583.54   3289.91  1013343.85
00:17:36.770
00:17:36.770 [2024-12-15 06:57:58.289527] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
06:57:58 -- target/host_management.sh@91 -- # kill -9 1340305
06:57:58 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
06:57:58 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
06:57:58 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
06:57:58 -- nvmf/common.sh@520 -- # config=()
06:57:58 -- nvmf/common.sh@520 -- # local subsystem config
06:57:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
06:57:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
06:57:58 -- nvmf/common.sh@542 -- # cat
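gen_nvmf_target_json, expanding above, renders exactly one bdev_nvme_attach_controller entry per subsystem index, and bdevperf replays it from --json /dev/fd/62 at startup instead of an RPC round-trip. A minimal sketch of the equivalent manual attach through SPDK's rpc.py, assuming the bdevperf instance from this run is still up on /var/tmp/bdevperf.sock:

    # sketch: hand-issue the attach that the generated JSON performs (values copied from this run)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0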
06:57:58 -- nvmf/common.sh@544 -- # jq .
06:57:58 -- nvmf/common.sh@545 -- # IFS=,
06:57:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{
  "params": {
    "name": "Nvme0",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
00:17:36.770 [2024-12-15 06:57:58.343212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:36.770 [2024-12-15 06:57:58.343262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340629 ]
00:17:37.029 EAL: No free 2048 kB hugepages reported on node 1
00:17:37.029 [2024-12-15 06:57:58.414331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:37.029 [2024-12-15 06:57:58.450702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:37.029 Running I/O for 1 seconds...
00:17:38.409
00:17:38.409 Latency(us)
00:17:38.409 Device Information                                                        : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min       max
00:17:38.409 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.409 Verification LBA range: start 0x0 length 0x400
00:17:38.409 Nvme0n1                                                                   :       1.01  5620.96  351.31    0.00  0.00  11213.74   471.86  24117.25
00:17:38.409 ===================================================================================================================
00:17:38.409 Total                                                                     :             5620.96  351.31    0.00  0.00  11213.74   471.86  24117.25
00:17:38.409
00:17:38.409 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1340305 Killed                  $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
06:57:59 -- target/host_management.sh@101 -- # stoptarget
06:57:59 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
06:57:59 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
06:57:59 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
06:57:59 -- target/host_management.sh@40 -- # nvmftestfini
06:57:59 -- nvmf/common.sh@476 -- # nvmfcleanup
06:57:59 -- nvmf/common.sh@116 -- # sync
06:57:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
06:57:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
06:57:59 -- nvmf/common.sh@124 -- # return 0
06:57:59 -- nvmf/common.sh@477 -- # '[' -n 1340035 ']'
06:57:59 -- nvmf/common.sh@478 -- # killprocess 1340035
[... killprocess trace (kill -0 / uname / ps checks on pid 1340035; process_name=reactor_1) elided ...]
06:57:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1340035'
killing process with pid 1340035
06:57:59 -- common/autotest_common.sh@955 -- # kill 1340035
06:57:59 -- common/autotest_common.sh@960 -- # wait 1340035
00:17:38.668 [2024-12-15 06:58:00.204960] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
06:58:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
06:58:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:17:38.668
00:17:38.668 real    0m5.065s
00:17:38.668 user    0m22.714s
00:17:38.668 sys     0m1.052s
00:17:38.668 ************************************
00:17:38.668 END TEST nvmf_host_management
00:17:38.668 ************************************
06:58:00 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:17:38.668 real    0m11.921s
00:17:38.668 user    0m24.672s
00:17:38.668 sys     0m6.175s
00:17:38.928 ************************************
00:17:38.928 END TEST nvmf_host_management
00:17:38.928 ************************************
06:58:00 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:17:38.928 ************************************
00:17:38.928 START TEST nvmf_lvol
00:17:38.928 ************************************
06:58:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma
00:17:38.928 * Looking for test storage...
00:17:38.928 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
06:58:00 -- common/autotest_common.sh@1690 -- # lcov --version
06:58:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
06:58:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2
[... scripts/common.sh cmp_versions 1.15 '<' 2 trace (returns 0) and the four LCOV_OPTS/LCOV exports carrying --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo flags elided ...]
06:58:00 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
06:58:00 -- nvmf/common.sh@7 -- # uname -s
06:58:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
06:58:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
06:58:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
06:58:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
06:58:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
06:58:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
06:58:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
06:58:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
06:58:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
06:58:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
06:58:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
06:58:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
06:58:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy
06:58:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
[... paths/export.sh@2-@6 traces elided: PATH prepended with the golangci/protoc/go tool directories, then exported and echoed (four near-identical multi-kilobyte PATH lines) ...]
06:58:00 -- nvmf/common.sh@50 -- # have_pci_nics=0
06:58:00 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
06:58:00 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
06:58:00 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
06:58:00 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
06:58:00 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
06:58:00 -- target/nvmf_lvol.sh@18 -- # nvmftestinit
06:58:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
06:58:00 -- nvmf/common.sh@436 -- # prepare_net_devs
06:58:00 -- nvmf/common.sh@398 -- # local -g is_hw=no
06:58:00 -- nvmf/common.sh@400 -- # remove_spdk_ns
06:58:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
06:58:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
06:58:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
[... pci_devs/pci_net_devs/pci_drivers/net_devs/e810/x722/mlx array declarations and pci_bus_cache appends elided ...]
06:58:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]]
06:58:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}")
06:58:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
Found 0000:d9:00.0 (0x15b3 - 0x1015)
06:58:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15'
06:58:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
Found 0000:d9:00.1 (0x15b3 - 0x1015)
06:58:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15'
06:58:06 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
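The discovery above keys on PCI vendor/device IDs (0x15b3 is Mellanox; 0x1015 corresponds to a ConnectX-4 Lx part). A quick way to cross-check the same scan by hand, outside the harness:

    # sketch: list Mellanox NICs by vendor ID, then the netdev behind the first port
    lspci -Dnn | grep -i 15b3
    ls /sys/bus/pci/devices/0000:d9:00.0/net    # prints mlx_0_0 on this rig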
06:58:06 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]]
06:58:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
06:58:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
06:58:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
Found net devices under 0000:d9:00.0: mlx_0_0
06:58:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
Found net devices under 0000:d9:00.1: mlx_0_1
06:58:06 -- nvmf/common.sh@402 -- # is_hw=yes
06:58:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
06:58:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]]
06:58:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]]
06:58:06 -- nvmf/common.sh@408 -- # rdma_device_init
06:58:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules
06:58:06 -- nvmf/common.sh@61 -- # modprobe ib_cm
06:58:06 -- nvmf/common.sh@62 -- # modprobe ib_core
06:58:06 -- nvmf/common.sh@63 -- # modprobe ib_umad
06:58:06 -- nvmf/common.sh@64 -- # modprobe ib_uverbs
06:58:06 -- nvmf/common.sh@65 -- # modprobe iw_cm
06:58:06 -- nvmf/common.sh@66 -- # modprobe rdma_cm
06:58:06 -- nvmf/common.sh@67 -- # modprobe rdma_ucm
06:58:06 -- nvmf/common.sh@490 -- # allocate_nic_ips
[... get_rdma_if_list / rxe_cfg rxe-net loop traces matching mlx_0_0 and mlx_0_1 elided ...]
06:58:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0
06:58:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0
06:58:06 -- nvmf/common.sh@112 -- # awk '{print $4}'
06:58:06 -- nvmf/common.sh@112 -- # cut -d/ -f1
06:58:06 -- nvmf/common.sh@73 -- # ip=192.168.100.8
06:58:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0
6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
    altname enp217s0f0np0
    altname ens818f0np0
    inet 192.168.100.8/24 scope global mlx_0_0
       valid_lft forever preferred_lft forever
06:58:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1
06:58:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9
06:58:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1
7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
    altname enp217s0f1np1
    altname ens818f1np1
    inet 192.168.100.9/24 scope global mlx_0_1
       valid_lft forever preferred_lft forever
06:58:07 -- nvmf/common.sh@410 -- # return 0
06:58:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
06:58:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma'
06:58:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]]
06:58:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips
[... second get_rdma_if_list / rxe_cfg rxe-net pass and the per-interface ip/awk/cut probes elided ...]
06:58:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8
192.168.100.9'
06:58:07 -- nvmf/common.sh@445 -- # head -n 1
06:58:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
06:58:07 -- nvmf/common.sh@446 -- # tail -n +2
06:58:07 -- nvmf/common.sh@446 -- # head -n 1
06:58:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
06:58:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']'
06:58:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
06:58:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']'
06:58:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma
06:58:07 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
06:58:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
06:58:07 -- nvmf/common.sh@469 -- # nvmfpid=1344219
06:58:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
06:58:07 -- nvmf/common.sh@470 -- # waitforlisten 1344219
06:58:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
06:58:07 -- common/autotest_common.sh@834 -- # local max_retries=100
06:58:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:45.804 [2024-12-15 06:58:07.184686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
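nvmfappstart boils down to launching nvmf_tgt in the background and having waitforlisten poll the RPC socket before the first rpc.py call goes out. A sketch of that pattern, with rpc_get_methods standing in for whatever probe waitforlisten actually performs:

    # sketch: start the target and block until its RPC socket answers
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done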
00:17:45.804 [2024-12-15 06:58:07.184733] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.804 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.804 [2024-12-15 06:58:07.255110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:45.804 [2024-12-15 06:58:07.291032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.804 [2024-12-15 06:58:07.291165] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.804 [2024-12-15 06:58:07.291175] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.804 [2024-12-15 06:58:07.291184] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.804 [2024-12-15 06:58:07.291236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.804 [2024-12-15 06:58:07.291334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.804 [2024-12-15 06:58:07.291334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.373 06:58:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.373 06:58:07 -- common/autotest_common.sh@862 -- # return 0 00:17:46.373 06:58:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:46.373 06:58:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.373 06:58:07 -- common/autotest_common.sh@10 -- # set +x 00:17:46.632 06:58:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.632 06:58:08 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:46.632 [2024-12-15 06:58:08.223561] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15a3600/0x15a7ab0) succeed. 00:17:46.632 [2024-12-15 06:58:08.232704] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15a4b00/0x15e9150) succeed. 
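With the target up and the RDMA transport created (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 above), everything the lvol test builds next is plain rpc.py calls. A condensed sketch of the sequence the traces below correspond to; the <lvstore-uuid>/<lvol-uuid> placeholders stand in for the ec6634dd-.../dd578de5-... values the script captures into variables:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                     # -> Malloc0
    $rpc bdev_malloc_create 64 512                                     # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'     # stripe the two malloc bdevs
    $rpc bdev_lvol_create_lvstore raid0 lvs                            # prints the lvstore UUID
    $rpc bdev_lvol_create -u <lvstore-uuid> lvol 20                    # 20M logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420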
00:17:46.891 06:58:08 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:47.150 06:58:08 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:47.150 06:58:08 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:47.150 06:58:08 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:47.150 06:58:08 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:47.409 06:58:08 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:47.669 06:58:09 -- target/nvmf_lvol.sh@29 -- # lvs=ec6634dd-133e-49e4-bc86-c3b095758daa 00:17:47.669 06:58:09 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec6634dd-133e-49e4-bc86-c3b095758daa lvol 20 00:17:47.928 06:58:09 -- target/nvmf_lvol.sh@32 -- # lvol=dd578de5-6a12-49ea-b6f1-fa4b3048dbeb 00:17:47.928 06:58:09 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:47.928 06:58:09 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dd578de5-6a12-49ea-b6f1-fa4b3048dbeb 00:17:48.187 06:58:09 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:48.446 [2024-12-15 06:58:09.848246] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:48.446 06:58:09 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:48.446 06:58:10 -- target/nvmf_lvol.sh@42 -- # perf_pid=1344699 00:17:48.446 06:58:10 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:48.446 06:58:10 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:48.705 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.642 06:58:11 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dd578de5-6a12-49ea-b6f1-fa4b3048dbeb MY_SNAPSHOT 00:17:49.642 06:58:11 -- target/nvmf_lvol.sh@47 -- # snapshot=296ebb2e-ef11-401a-bda2-9fb6b0b04341 00:17:49.642 06:58:11 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dd578de5-6a12-49ea-b6f1-fa4b3048dbeb 30 00:17:49.901 06:58:11 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 296ebb2e-ef11-401a-bda2-9fb6b0b04341 MY_CLONE 00:17:50.160 06:58:11 -- target/nvmf_lvol.sh@49 -- # clone=d500244c-1ea4-42cf-9a75-b1f9c4ab3e98 00:17:50.160 06:58:11 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d500244c-1ea4-42cf-9a75-b1f9c4ab3e98 00:17:50.419 06:58:11 -- target/nvmf_lvol.sh@53 -- # wait 1344699 00:18:00.401 Initializing NVMe Controllers 00:18:00.401 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0
00:18:00.401 Controller IO queue size 128, less than required.
00:18:00.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:00.401 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:18:00.401 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:18:00.401 Initialization complete. Launching workers.
00:18:00.401 ========================================================
00:18:00.401 Latency(us)
00:18:00.401 Device Information                                                             :     IOPS   MiB/s   Average       min        max
00:18:00.401 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3 : 16718.70   65.31   7658.19   2083.04   35822.47
00:18:00.401 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4 : 16673.80   65.13   7678.20   3372.52   37327.89
00:18:00.401 ========================================================
00:18:00.401 Total                                                                          : 33392.50  130.44   7668.18   2083.04   37327.89
00:18:00.401
06:58:21 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
06:58:21 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dd578de5-6a12-49ea-b6f1-fa4b3048dbeb
06:58:21 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec6634dd-133e-49e4-bc86-c3b095758daa
06:58:22 -- target/nvmf_lvol.sh@60 -- # rm -f
06:58:22 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
06:58:22 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
06:58:22 -- nvmf/common.sh@476 -- # nvmfcleanup
06:58:22 -- nvmf/common.sh@116 -- # sync
06:58:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
06:58:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
06:58:22 -- nvmf/common.sh@124 -- # return 0
06:58:22 -- nvmf/common.sh@477 -- # '[' -n 1344219 ']'
06:58:22 -- nvmf/common.sh@478 -- # killprocess 1344219
[... killprocess trace (kill -0 / uname / ps checks on pid 1344219; process_name=reactor_0) elided ...]
06:58:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1344219'
killing process with pid 1344219
06:58:22 -- common/autotest_common.sh@955 -- # kill 1344219
06:58:22 -- common/autotest_common.sh@960 -- # wait 1344219
06:58:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
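The grow path exercised while spdk_nvme_perf was running (pid 1344699, the process the script waits on) maps to four lvol RPCs; a condensed sketch with the IDs from this run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_lvol_snapshot dd578de5-6a12-49ea-b6f1-fa4b3048dbeb MY_SNAPSHOT   # freeze the 20M lvol
    $rpc bdev_lvol_resize dd578de5-6a12-49ea-b6f1-fa4b3048dbeb 30              # grow the live lvol to 30M
    $rpc bdev_lvol_clone 296ebb2e-ef11-401a-bda2-9fb6b0b04341 MY_CLONE         # thin clone of the snapshot
    $rpc bdev_lvol_inflate d500244c-1ea4-42cf-9a75-b1f9c4ab3e98                # decouple the clone from its snapshot

Running these under an active verify workload is the point of the test: the exported namespace stays serviceable across snapshot, resize, clone and inflate, and the ~33k IOPS total above is the evidence.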
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:00.920 00:18:00.920 real 0m22.071s 00:18:00.920 user 1m11.770s 00:18:00.920 sys 0m6.276s 00:18:00.920 06:58:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:00.920 06:58:22 -- common/autotest_common.sh@10 -- # set +x 00:18:00.920 ************************************ 00:18:00.920 END TEST nvmf_lvol 00:18:00.920 ************************************ 00:18:00.920 06:58:22 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:00.920 06:58:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:00.920 06:58:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:00.920 06:58:22 -- common/autotest_common.sh@10 -- # set +x 00:18:00.920 ************************************ 00:18:00.920 START TEST nvmf_lvs_grow 00:18:00.920 ************************************ 00:18:00.920 06:58:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:00.920 * Looking for test storage... 00:18:00.920 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:00.920 06:58:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:00.920 06:58:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:00.920 06:58:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:01.180 06:58:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:01.180 06:58:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:01.180 06:58:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:01.180 06:58:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:01.180 06:58:22 -- scripts/common.sh@335 -- # IFS=.-: 00:18:01.180 06:58:22 -- scripts/common.sh@335 -- # read -ra ver1 00:18:01.180 06:58:22 -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.180 06:58:22 -- scripts/common.sh@336 -- # read -ra ver2 00:18:01.180 06:58:22 -- scripts/common.sh@337 -- # local 'op=<' 00:18:01.180 06:58:22 -- scripts/common.sh@339 -- # ver1_l=2 00:18:01.180 06:58:22 -- scripts/common.sh@340 -- # ver2_l=1 00:18:01.180 06:58:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:01.180 06:58:22 -- scripts/common.sh@343 -- # case "$op" in 00:18:01.180 06:58:22 -- scripts/common.sh@344 -- # : 1 00:18:01.180 06:58:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:01.180 06:58:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.180 06:58:22 -- scripts/common.sh@364 -- # decimal 1 00:18:01.180 06:58:22 -- scripts/common.sh@352 -- # local d=1 00:18:01.180 06:58:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.180 06:58:22 -- scripts/common.sh@354 -- # echo 1 00:18:01.180 06:58:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:01.180 06:58:22 -- scripts/common.sh@365 -- # decimal 2 00:18:01.180 06:58:22 -- scripts/common.sh@352 -- # local d=2 00:18:01.180 06:58:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.180 06:58:22 -- scripts/common.sh@354 -- # echo 2 00:18:01.180 06:58:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:01.180 06:58:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:01.180 06:58:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:01.180 06:58:22 -- scripts/common.sh@367 -- # return 0 00:18:01.180 06:58:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.180 06:58:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.180 --rc genhtml_branch_coverage=1 00:18:01.180 --rc genhtml_function_coverage=1 00:18:01.180 --rc genhtml_legend=1 00:18:01.180 --rc geninfo_all_blocks=1 00:18:01.180 --rc geninfo_unexecuted_blocks=1 00:18:01.180 00:18:01.180 ' 00:18:01.180 06:58:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.180 --rc genhtml_branch_coverage=1 00:18:01.180 --rc genhtml_function_coverage=1 00:18:01.180 --rc genhtml_legend=1 00:18:01.180 --rc geninfo_all_blocks=1 00:18:01.180 --rc geninfo_unexecuted_blocks=1 00:18:01.180 00:18:01.180 ' 00:18:01.180 06:58:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.180 --rc genhtml_branch_coverage=1 00:18:01.180 --rc genhtml_function_coverage=1 00:18:01.180 --rc genhtml_legend=1 00:18:01.180 --rc geninfo_all_blocks=1 00:18:01.180 --rc geninfo_unexecuted_blocks=1 00:18:01.180 00:18:01.180 ' 00:18:01.180 06:58:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:01.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.180 --rc genhtml_branch_coverage=1 00:18:01.180 --rc genhtml_function_coverage=1 00:18:01.180 --rc genhtml_legend=1 00:18:01.180 --rc geninfo_all_blocks=1 00:18:01.180 --rc geninfo_unexecuted_blocks=1 00:18:01.180 00:18:01.180 ' 00:18:01.180 06:58:22 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.180 06:58:22 -- nvmf/common.sh@7 -- # uname -s 00:18:01.180 06:58:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.180 06:58:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.180 06:58:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.180 06:58:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.180 06:58:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.180 06:58:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.180 06:58:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.180 06:58:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.180 06:58:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.180 06:58:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.180 06:58:22 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:01.180 06:58:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:01.180 06:58:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.180 06:58:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.180 06:58:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.180 06:58:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:01.180 06:58:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.180 06:58:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.180 06:58:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.180 06:58:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.180 06:58:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.180 06:58:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.180 06:58:22 -- paths/export.sh@5 -- # export PATH 00:18:01.180 06:58:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.180 06:58:22 -- nvmf/common.sh@46 -- # : 0 00:18:01.180 06:58:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.180 06:58:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.180 06:58:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.180 06:58:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.180 06:58:22 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.180 06:58:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:01.180 06:58:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.180 06:58:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.180 06:58:22 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:01.180 06:58:22 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.180 06:58:22 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:01.180 06:58:22 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:01.180 06:58:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.180 06:58:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:01.180 06:58:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:01.180 06:58:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.180 06:58:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.180 06:58:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.180 06:58:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.180 06:58:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:01.181 06:58:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:01.181 06:58:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:01.181 06:58:22 -- common/autotest_common.sh@10 -- # set +x 00:18:07.750 06:58:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:07.750 06:58:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:07.750 06:58:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:07.750 06:58:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:07.750 06:58:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:07.750 06:58:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:07.750 06:58:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:07.750 06:58:28 -- nvmf/common.sh@294 -- # net_devs=() 00:18:07.750 06:58:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:07.750 06:58:28 -- nvmf/common.sh@295 -- # e810=() 00:18:07.750 06:58:28 -- nvmf/common.sh@295 -- # local -ga e810 00:18:07.750 06:58:28 -- nvmf/common.sh@296 -- # x722=() 00:18:07.750 06:58:28 -- nvmf/common.sh@296 -- # local -ga x722 00:18:07.750 06:58:28 -- nvmf/common.sh@297 -- # mlx=() 00:18:07.750 06:58:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:07.750 06:58:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.750 06:58:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:07.750 06:58:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
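The vendor/device tables traced above drive the NIC auto-detection: nvmf/common.sh buckets PCI IDs into e810, x722, and mlx arrays and, for an RDMA run, keeps only the Mellanox family. A minimal standalone sketch of the same classification, assuming lspci is available (the script itself walks a prebuilt pci_bus_cache map rather than calling lspci):

#!/usr/bin/env bash
# RDMA-capable NIC IDs from the tables above (vendor:device)
mlx_ids="15b3:1015 15b3:1017 15b3:1019 15b3:1013 15b3:101d 15b3:1021 15b3:a2d6 15b3:a2dc"
e810_ids="8086:1592 8086:159b"
x722_ids="8086:37d2"
for id in $mlx_ids $e810_ids $x722_ids; do
  # -D prints full PCI domain addresses, -d filters by vendor:device
  lspci -D -d "$id" | awk '{print $1}'
done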
00:18:07.750 06:58:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:07.750 06:58:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:07.750 06:58:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:07.750 06:58:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:07.750 06:58:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:07.750 06:58:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:07.750 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:07.750 06:58:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:07.750 06:58:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:07.750 06:58:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:07.750 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:07.750 06:58:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:07.750 06:58:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:07.750 06:58:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:07.750 06:58:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.750 06:58:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:07.750 06:58:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.750 06:58:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:07.750 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:07.750 06:58:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.750 06:58:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:07.750 06:58:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.750 06:58:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:07.750 06:58:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.750 06:58:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:07.750 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:07.750 06:58:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.750 06:58:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:07.750 06:58:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:07.750 06:58:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:07.750 06:58:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:07.750 06:58:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:07.750 06:58:28 -- nvmf/common.sh@57 -- # uname 00:18:07.750 06:58:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:18:07.750 06:58:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:07.750 06:58:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:07.750 06:58:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:07.750 06:58:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:07.750 06:58:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:07.750 06:58:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:07.750 06:58:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:07.750 06:58:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:07.751 06:58:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:07.751 06:58:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:07.751 06:58:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:07.751 06:58:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:07.751 06:58:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:07.751 06:58:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:07.751 06:58:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:07.751 06:58:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:07.751 06:58:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.751 06:58:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:07.751 06:58:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:07.751 06:58:28 -- nvmf/common.sh@104 -- # continue 2 00:18:07.751 06:58:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:07.751 06:58:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.751 06:58:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:07.751 06:58:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.751 06:58:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:07.751 06:58:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:07.751 06:58:28 -- nvmf/common.sh@104 -- # continue 2 00:18:07.751 06:58:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:07.751 06:58:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:07.751 06:58:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:07.751 06:58:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:07.751 06:58:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:07.751 06:58:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:07.751 06:58:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:07.751 06:58:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:07.751 06:58:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:07.751 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:07.751 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:07.751 altname enp217s0f0np0 00:18:07.751 altname ens818f0np0 00:18:07.751 inet 192.168.100.8/24 scope global mlx_0_0 00:18:07.751 valid_lft forever preferred_lft forever 00:18:07.751 06:58:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:07.751 06:58:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:07.751 06:58:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:07.751 06:58:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:07.751 06:58:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:07.751 06:58:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:07.751 06:58:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:07.751 06:58:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:07.751 06:58:28 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:18:07.751 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:07.751 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:07.751 altname enp217s0f1np1 00:18:07.751 altname ens818f1np1 00:18:07.751 inet 192.168.100.9/24 scope global mlx_0_1 00:18:07.751 valid_lft forever preferred_lft forever 00:18:07.751 06:58:28 -- nvmf/common.sh@410 -- # return 0 00:18:07.751 06:58:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:07.751 06:58:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:07.751 06:58:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:07.751 06:58:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:07.751 06:58:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:07.751 06:58:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:07.751 06:58:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:07.751 06:58:29 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:07.751 06:58:29 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:07.751 06:58:29 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:07.751 06:58:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:07.751 06:58:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.751 06:58:29 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:07.751 06:58:29 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:07.751 06:58:29 -- nvmf/common.sh@104 -- # continue 2 00:18:07.751 06:58:29 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:07.751 06:58:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.751 06:58:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:07.751 06:58:29 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.751 06:58:29 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:07.751 06:58:29 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:07.751 06:58:29 -- nvmf/common.sh@104 -- # continue 2 00:18:07.751 06:58:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:07.751 06:58:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:07.751 06:58:29 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:07.751 06:58:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:07.751 06:58:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:07.751 06:58:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:07.751 06:58:29 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:07.751 06:58:29 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:07.751 06:58:29 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:07.751 06:58:29 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:07.751 06:58:29 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:07.751 06:58:29 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:07.751 06:58:29 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:07.751 192.168.100.9' 00:18:07.751 06:58:29 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:07.751 192.168.100.9' 00:18:07.751 06:58:29 -- nvmf/common.sh@445 -- # head -n 1 00:18:07.751 06:58:29 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:07.751 06:58:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:07.751 192.168.100.9' 00:18:07.751 06:58:29 -- nvmf/common.sh@446 -- # tail -n +2 00:18:07.751 06:58:29 -- nvmf/common.sh@446 -- # head -n 1 00:18:07.751 06:58:29 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:07.751 06:58:29 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:07.751 06:58:29 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:07.751 06:58:29 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:07.751 06:58:29 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:07.751 06:58:29 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:07.751 06:58:29 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:07.751 06:58:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:07.751 06:58:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.751 06:58:29 -- common/autotest_common.sh@10 -- # set +x 00:18:07.751 06:58:29 -- nvmf/common.sh@469 -- # nvmfpid=1350150 00:18:07.751 06:58:29 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:07.751 06:58:29 -- nvmf/common.sh@470 -- # waitforlisten 1350150 00:18:07.751 06:58:29 -- common/autotest_common.sh@829 -- # '[' -z 1350150 ']' 00:18:07.751 06:58:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.751 06:58:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.751 06:58:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.751 06:58:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.751 06:58:29 -- common/autotest_common.sh@10 -- # set +x 00:18:07.751 [2024-12-15 06:58:29.156094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:07.751 [2024-12-15 06:58:29.156143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.751 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.751 [2024-12-15 06:58:29.227289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.751 [2024-12-15 06:58:29.262971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:07.751 [2024-12-15 06:58:29.263107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.751 [2024-12-15 06:58:29.263117] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.751 [2024-12-15 06:58:29.263126] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
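The target addresses picked up above (192.168.100.8 and 192.168.100.9) come straight off the mlx interfaces via the ip/awk/cut pipeline traced at nvmf/common.sh@111-112. A self-contained restatement of that helper, assuming the interface name is passed as the first argument:

get_ip_address() {
  local interface=$1
  # $4 of `ip -o -4 addr show` is the CIDR address, e.g. 192.168.100.8/24
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig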
00:18:07.751 [2024-12-15 06:58:29.263149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.688 06:58:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.688 06:58:29 -- common/autotest_common.sh@862 -- # return 0 00:18:08.688 06:58:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:08.688 06:58:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.688 06:58:29 -- common/autotest_common.sh@10 -- # set +x 00:18:08.688 06:58:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:08.688 [2024-12-15 06:58:30.193669] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f11240/0x1f156f0) succeed. 00:18:08.688 [2024-12-15 06:58:30.207124] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f126f0/0x1f56d90) succeed. 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:08.688 06:58:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:08.688 06:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.688 06:58:30 -- common/autotest_common.sh@10 -- # set +x 00:18:08.688 ************************************ 00:18:08.688 START TEST lvs_grow_clean 00:18:08.688 ************************************ 00:18:08.688 06:58:30 -- common/autotest_common.sh@1114 -- # lvs_grow 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:08.688 06:58:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:08.947 06:58:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:08.947 06:58:30 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:09.206 06:58:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:09.206 06:58:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:09.206 06:58:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:09.465 06:58:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:09.465 06:58:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:09.465 06:58:30 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
ea72ccab-d6af-4d1e-97c3-667644bafeca lvol 150 00:18:09.465 06:58:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2d502eb4-5634-428a-837a-5dbe1394bdda 00:18:09.465 06:58:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:09.465 06:58:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:09.724 [2024-12-15 06:58:31.218771] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:09.724 [2024-12-15 06:58:31.218822] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:09.724 true 00:18:09.724 06:58:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:09.724 06:58:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:09.983 06:58:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:09.983 06:58:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:09.983 06:58:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2d502eb4-5634-428a-837a-5dbe1394bdda 00:18:10.242 06:58:31 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:10.502 [2024-12-15 06:58:31.901017] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:10.502 06:58:31 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:10.502 06:58:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1350642 00:18:10.502 06:58:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.502 06:58:32 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:10.502 06:58:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1350642 /var/tmp/bdevperf.sock 00:18:10.502 06:58:32 -- common/autotest_common.sh@829 -- # '[' -z 1350642 ']' 00:18:10.502 06:58:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.502 06:58:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.502 06:58:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.502 06:58:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.502 06:58:32 -- common/autotest_common.sh@10 -- # set +x 00:18:10.502 [2024-12-15 06:58:32.116256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
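The cluster counts the test asserts on follow from the sizes: a 200M aio file carved into 4194304-byte clusters holds 50, and with the metadata ratio used here the new lvstore reports 49 data clusters; after the backing file is grown to 400M and rescanned, bdev_lvol_grow_lvstore (issued later in this run) brings it to 99. A condensed sketch of that sequence, assuming rpc.py is on PATH and using a placeholder file path in place of the test's own tree:

AIO=/tmp/aio_bdev                          # the test uses test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"
rpc.py bdev_aio_create "$AIO" aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
truncate -s 400M "$AIO"                    # grow the backing file
rpc.py bdev_aio_rescan aio_bdev            # aio bdev: 51200 -> 102400 blocks
rpc.py bdev_lvol_grow_lvstore -u "$lvs"    # total_data_clusters: 49 -> 99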
00:18:10.502 [2024-12-15 06:58:32.116310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350642 ] 00:18:10.761 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.761 [2024-12-15 06:58:32.187996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.761 [2024-12-15 06:58:32.225379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.329 06:58:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.329 06:58:32 -- common/autotest_common.sh@862 -- # return 0 00:18:11.329 06:58:32 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:11.588 Nvme0n1 00:18:11.589 06:58:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:11.848 [ 00:18:11.848 { 00:18:11.848 "name": "Nvme0n1", 00:18:11.848 "aliases": [ 00:18:11.848 "2d502eb4-5634-428a-837a-5dbe1394bdda" 00:18:11.848 ], 00:18:11.848 "product_name": "NVMe disk", 00:18:11.848 "block_size": 4096, 00:18:11.848 "num_blocks": 38912, 00:18:11.848 "uuid": "2d502eb4-5634-428a-837a-5dbe1394bdda", 00:18:11.848 "assigned_rate_limits": { 00:18:11.848 "rw_ios_per_sec": 0, 00:18:11.848 "rw_mbytes_per_sec": 0, 00:18:11.848 "r_mbytes_per_sec": 0, 00:18:11.848 "w_mbytes_per_sec": 0 00:18:11.848 }, 00:18:11.848 "claimed": false, 00:18:11.848 "zoned": false, 00:18:11.848 "supported_io_types": { 00:18:11.848 "read": true, 00:18:11.848 "write": true, 00:18:11.848 "unmap": true, 00:18:11.848 "write_zeroes": true, 00:18:11.848 "flush": true, 00:18:11.848 "reset": true, 00:18:11.848 "compare": true, 00:18:11.848 "compare_and_write": true, 00:18:11.848 "abort": true, 00:18:11.848 "nvme_admin": true, 00:18:11.848 "nvme_io": true 00:18:11.848 }, 00:18:11.848 "memory_domains": [ 00:18:11.848 { 00:18:11.848 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:11.848 "dma_device_type": 0 00:18:11.848 } 00:18:11.848 ], 00:18:11.848 "driver_specific": { 00:18:11.848 "nvme": [ 00:18:11.848 { 00:18:11.848 "trid": { 00:18:11.848 "trtype": "RDMA", 00:18:11.848 "adrfam": "IPv4", 00:18:11.848 "traddr": "192.168.100.8", 00:18:11.848 "trsvcid": "4420", 00:18:11.848 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:11.848 }, 00:18:11.848 "ctrlr_data": { 00:18:11.848 "cntlid": 1, 00:18:11.848 "vendor_id": "0x8086", 00:18:11.848 "model_number": "SPDK bdev Controller", 00:18:11.848 "serial_number": "SPDK0", 00:18:11.848 "firmware_revision": "24.01.1", 00:18:11.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:11.848 "oacs": { 00:18:11.848 "security": 0, 00:18:11.848 "format": 0, 00:18:11.848 "firmware": 0, 00:18:11.848 "ns_manage": 0 00:18:11.848 }, 00:18:11.848 "multi_ctrlr": true, 00:18:11.848 "ana_reporting": false 00:18:11.848 }, 00:18:11.848 "vs": { 00:18:11.848 "nvme_version": "1.3" 00:18:11.848 }, 00:18:11.848 "ns_data": { 00:18:11.848 "id": 1, 00:18:11.848 "can_share": true 00:18:11.848 } 00:18:11.848 } 00:18:11.848 ], 00:18:11.848 "mp_policy": "active_passive" 00:18:11.848 } 00:18:11.848 } 00:18:11.848 ] 00:18:11.848 06:58:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1350881 00:18:11.848 06:58:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:11.848 06:58:33 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:11.848 Running I/O for 10 seconds... 00:18:13.228 Latency(us) 00:18:13.228 [2024-12-15T05:58:34.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.228 [2024-12-15T05:58:34.869Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.228 Nvme0n1 : 1.00 36642.00 143.13 0.00 0.00 0.00 0.00 0.00 00:18:13.228 [2024-12-15T05:58:34.869Z] =================================================================================================================== 00:18:13.228 [2024-12-15T05:58:34.869Z] Total : 36642.00 143.13 0.00 0.00 0.00 0.00 0.00 00:18:13.228 00:18:13.796 06:58:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:14.055 [2024-12-15T05:58:35.696Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.055 Nvme0n1 : 2.00 36976.50 144.44 0.00 0.00 0.00 0.00 0.00 00:18:14.055 [2024-12-15T05:58:35.696Z] =================================================================================================================== 00:18:14.055 [2024-12-15T05:58:35.696Z] Total : 36976.50 144.44 0.00 0.00 0.00 0.00 0.00 00:18:14.055 00:18:14.055 true 00:18:14.055 06:58:35 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:14.055 06:58:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:14.314 06:58:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:14.314 06:58:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:14.314 06:58:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 1350881 00:18:14.882 [2024-12-15T05:58:36.523Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.882 Nvme0n1 : 3.00 37088.00 144.88 0.00 0.00 0.00 0.00 0.00 00:18:14.882 [2024-12-15T05:58:36.523Z] =================================================================================================================== 00:18:14.882 [2024-12-15T05:58:36.523Z] Total : 37088.00 144.88 0.00 0.00 0.00 0.00 0.00 00:18:14.882 00:18:15.843 [2024-12-15T05:58:37.484Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.843 Nvme0n1 : 4.00 37208.75 145.35 0.00 0.00 0.00 0.00 0.00 00:18:15.843 [2024-12-15T05:58:37.484Z] =================================================================================================================== 00:18:15.843 [2024-12-15T05:58:37.484Z] Total : 37208.75 145.35 0.00 0.00 0.00 0.00 0.00 00:18:15.843 00:18:16.834 [2024-12-15T05:58:38.475Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.834 Nvme0n1 : 5.00 37280.40 145.63 0.00 0.00 0.00 0.00 0.00 00:18:16.834 [2024-12-15T05:58:38.475Z] =================================================================================================================== 00:18:16.834 [2024-12-15T05:58:38.476Z] Total : 37280.40 145.63 0.00 0.00 0.00 0.00 0.00 00:18:16.835 00:18:18.213 [2024-12-15T05:58:39.854Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.213 Nvme0n1 : 6.00 37350.00 145.90 0.00 0.00 0.00 0.00 0.00 00:18:18.213 [2024-12-15T05:58:39.854Z] 
=================================================================================================================== 00:18:18.213 [2024-12-15T05:58:39.854Z] Total : 37350.00 145.90 0.00 0.00 0.00 0.00 0.00 00:18:18.213 00:18:19.150 [2024-12-15T05:58:40.791Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.150 Nvme0n1 : 7.00 37349.29 145.90 0.00 0.00 0.00 0.00 0.00 00:18:19.150 [2024-12-15T05:58:40.791Z] =================================================================================================================== 00:18:19.150 [2024-12-15T05:58:40.791Z] Total : 37349.29 145.90 0.00 0.00 0.00 0.00 0.00 00:18:19.150 00:18:20.085 [2024-12-15T05:58:41.726Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.085 Nvme0n1 : 8.00 37367.75 145.97 0.00 0.00 0.00 0.00 0.00 00:18:20.085 [2024-12-15T05:58:41.726Z] =================================================================================================================== 00:18:20.085 [2024-12-15T05:58:41.726Z] Total : 37367.75 145.97 0.00 0.00 0.00 0.00 0.00 00:18:20.085 00:18:21.022 [2024-12-15T05:58:42.663Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.022 Nvme0n1 : 9.00 37404.11 146.11 0.00 0.00 0.00 0.00 0.00 00:18:21.022 [2024-12-15T05:58:42.663Z] =================================================================================================================== 00:18:21.022 [2024-12-15T05:58:42.663Z] Total : 37404.11 146.11 0.00 0.00 0.00 0.00 0.00 00:18:21.022 00:18:21.960 [2024-12-15T05:58:43.601Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.960 Nvme0n1 : 10.00 37433.20 146.22 0.00 0.00 0.00 0.00 0.00 00:18:21.960 [2024-12-15T05:58:43.601Z] =================================================================================================================== 00:18:21.960 [2024-12-15T05:58:43.601Z] Total : 37433.20 146.22 0.00 0.00 0.00 0.00 0.00 00:18:21.960 00:18:21.960 00:18:21.960 Latency(us) 00:18:21.960 [2024-12-15T05:58:43.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.960 [2024-12-15T05:58:43.601Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.960 Nvme0n1 : 10.00 37433.30 146.22 0.00 0.00 3417.00 2280.65 10118.76 00:18:21.960 [2024-12-15T05:58:43.601Z] =================================================================================================================== 00:18:21.960 [2024-12-15T05:58:43.601Z] Total : 37433.30 146.22 0.00 0.00 3417.00 2280.65 10118.76 00:18:21.960 0 00:18:21.960 06:58:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1350642 00:18:21.960 06:58:43 -- common/autotest_common.sh@936 -- # '[' -z 1350642 ']' 00:18:21.960 06:58:43 -- common/autotest_common.sh@940 -- # kill -0 1350642 00:18:21.960 06:58:43 -- common/autotest_common.sh@941 -- # uname 00:18:21.960 06:58:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:21.960 06:58:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1350642 00:18:21.960 06:58:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:21.960 06:58:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:21.960 06:58:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1350642' 00:18:21.960 killing process with pid 1350642 00:18:21.960 06:58:43 -- common/autotest_common.sh@955 -- # kill 1350642 00:18:21.960 Received shutdown signal, test time was about 10.000000 seconds 
00:18:21.960 00:18:21.960 Latency(us) 00:18:21.960 [2024-12-15T05:58:43.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.960 [2024-12-15T05:58:43.601Z] =================================================================================================================== 00:18:21.960 [2024-12-15T05:58:43.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.960 06:58:43 -- common/autotest_common.sh@960 -- # wait 1350642 00:18:22.219 06:58:43 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:22.479 06:58:43 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:22.479 06:58:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:22.479 06:58:44 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:22.479 06:58:44 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:22.479 06:58:44 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:22.739 [2024-12-15 06:58:44.245714] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:22.739 06:58:44 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:22.739 06:58:44 -- common/autotest_common.sh@650 -- # local es=0 00:18:22.739 06:58:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:22.739 06:58:44 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:22.739 06:58:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.739 06:58:44 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:22.739 06:58:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.739 06:58:44 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:22.739 06:58:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.739 06:58:44 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:22.739 06:58:44 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:22.739 06:58:44 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:22.998 request: 00:18:22.998 { 00:18:22.998 "uuid": "ea72ccab-d6af-4d1e-97c3-667644bafeca", 00:18:22.998 "method": "bdev_lvol_get_lvstores", 00:18:22.998 "req_id": 1 00:18:22.998 } 00:18:22.998 Got JSON-RPC error response 00:18:22.998 response: 00:18:22.998 { 00:18:22.998 "code": -19, 00:18:22.998 "message": "No such device" 00:18:22.998 } 00:18:22.998 06:58:44 -- common/autotest_common.sh@653 -- # es=1 00:18:22.998 06:58:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.998 06:58:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.998 06:58:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.998 06:58:44 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:22.998 aio_bdev 00:18:23.257 06:58:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2d502eb4-5634-428a-837a-5dbe1394bdda 00:18:23.257 06:58:44 -- common/autotest_common.sh@897 -- # local bdev_name=2d502eb4-5634-428a-837a-5dbe1394bdda 00:18:23.257 06:58:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:23.257 06:58:44 -- common/autotest_common.sh@899 -- # local i 00:18:23.257 06:58:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:23.257 06:58:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:23.257 06:58:44 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:23.257 06:58:44 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2d502eb4-5634-428a-837a-5dbe1394bdda -t 2000 00:18:23.516 [ 00:18:23.516 { 00:18:23.516 "name": "2d502eb4-5634-428a-837a-5dbe1394bdda", 00:18:23.516 "aliases": [ 00:18:23.516 "lvs/lvol" 00:18:23.516 ], 00:18:23.516 "product_name": "Logical Volume", 00:18:23.516 "block_size": 4096, 00:18:23.516 "num_blocks": 38912, 00:18:23.516 "uuid": "2d502eb4-5634-428a-837a-5dbe1394bdda", 00:18:23.516 "assigned_rate_limits": { 00:18:23.516 "rw_ios_per_sec": 0, 00:18:23.516 "rw_mbytes_per_sec": 0, 00:18:23.516 "r_mbytes_per_sec": 0, 00:18:23.516 "w_mbytes_per_sec": 0 00:18:23.516 }, 00:18:23.516 "claimed": false, 00:18:23.516 "zoned": false, 00:18:23.516 "supported_io_types": { 00:18:23.516 "read": true, 00:18:23.516 "write": true, 00:18:23.516 "unmap": true, 00:18:23.516 "write_zeroes": true, 00:18:23.516 "flush": false, 00:18:23.516 "reset": true, 00:18:23.516 "compare": false, 00:18:23.516 "compare_and_write": false, 00:18:23.516 "abort": false, 00:18:23.516 "nvme_admin": false, 00:18:23.516 "nvme_io": false 00:18:23.516 }, 00:18:23.516 "driver_specific": { 00:18:23.516 "lvol": { 00:18:23.516 "lvol_store_uuid": "ea72ccab-d6af-4d1e-97c3-667644bafeca", 00:18:23.516 "base_bdev": "aio_bdev", 00:18:23.516 "thin_provision": false, 00:18:23.516 "snapshot": false, 00:18:23.516 "clone": false, 00:18:23.516 "esnap_clone": false 00:18:23.516 } 00:18:23.516 } 00:18:23.516 } 00:18:23.516 ] 00:18:23.516 06:58:44 -- common/autotest_common.sh@905 -- # return 0 00:18:23.516 06:58:44 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:23.516 06:58:44 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:23.516 06:58:45 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:23.517 06:58:45 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:23.517 06:58:45 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:23.775 06:58:45 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:23.775 06:58:45 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2d502eb4-5634-428a-837a-5dbe1394bdda 00:18:24.034 06:58:45 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea72ccab-d6af-4d1e-97c3-667644bafeca 00:18:24.034 06:58:45 -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:24.293 00:18:24.293 real 0m15.601s 00:18:24.293 user 0m15.486s 00:18:24.293 sys 0m1.133s 00:18:24.293 06:58:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:24.293 06:58:45 -- common/autotest_common.sh@10 -- # set +x 00:18:24.293 ************************************ 00:18:24.293 END TEST lvs_grow_clean 00:18:24.293 ************************************ 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:24.293 06:58:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:24.293 06:58:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:24.293 06:58:45 -- common/autotest_common.sh@10 -- # set +x 00:18:24.293 ************************************ 00:18:24.293 START TEST lvs_grow_dirty 00:18:24.293 ************************************ 00:18:24.293 06:58:45 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:24.293 06:58:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:24.553 06:58:45 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:24.553 06:58:46 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:24.553 06:58:46 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:24.812 06:58:46 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:24.812 06:58:46 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:24.813 06:58:46 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:25.072 06:58:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:25.072 06:58:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:25.072 06:58:46 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f lvol 150 00:18:25.072 06:58:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=49f07c4f-7f13-490a-9700-327ef885d777 00:18:25.072 06:58:46 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:25.072 06:58:46 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:18:25.331 [2024-12-15 06:58:46.782585] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:25.331 [2024-12-15 06:58:46.782637] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:25.331 true 00:18:25.331 06:58:46 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:25.331 06:58:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:25.590 06:58:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:25.590 06:58:46 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:25.590 06:58:47 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 49f07c4f-7f13-490a-9700-327ef885d777 00:18:25.850 06:58:47 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:25.850 06:58:47 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:26.109 06:58:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1353370 00:18:26.109 06:58:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.109 06:58:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1353370 /var/tmp/bdevperf.sock 00:18:26.109 06:58:47 -- common/autotest_common.sh@829 -- # '[' -z 1353370 ']' 00:18:26.109 06:58:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.109 06:58:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.109 06:58:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.109 06:58:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.109 06:58:47 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:26.109 06:58:47 -- common/autotest_common.sh@10 -- # set +x 00:18:26.109 [2024-12-15 06:58:47.678694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
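Both grow variants drive I/O the same way: bdevperf starts with -z so it idles until told to run, the remote lvol is attached over RDMA through bdevperf's private RPC socket, and perform_tests launches the 10-second randwrite pass. In outline, with the long jenkins build paths abbreviated (all flags as they appear in the traces above):

bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
       -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # returns when the run ends;
                                                      # the test then killprocess-es the saved pid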
00:18:26.109 [2024-12-15 06:58:47.678749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353370 ] 00:18:26.109 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.109 [2024-12-15 06:58:47.748744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.369 [2024-12-15 06:58:47.785684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.937 06:58:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.937 06:58:48 -- common/autotest_common.sh@862 -- # return 0 00:18:26.937 06:58:48 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:27.196 Nvme0n1 00:18:27.196 06:58:48 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:27.456 [ 00:18:27.456 { 00:18:27.456 "name": "Nvme0n1", 00:18:27.456 "aliases": [ 00:18:27.456 "49f07c4f-7f13-490a-9700-327ef885d777" 00:18:27.456 ], 00:18:27.456 "product_name": "NVMe disk", 00:18:27.456 "block_size": 4096, 00:18:27.456 "num_blocks": 38912, 00:18:27.456 "uuid": "49f07c4f-7f13-490a-9700-327ef885d777", 00:18:27.456 "assigned_rate_limits": { 00:18:27.456 "rw_ios_per_sec": 0, 00:18:27.456 "rw_mbytes_per_sec": 0, 00:18:27.456 "r_mbytes_per_sec": 0, 00:18:27.456 "w_mbytes_per_sec": 0 00:18:27.456 }, 00:18:27.456 "claimed": false, 00:18:27.456 "zoned": false, 00:18:27.456 "supported_io_types": { 00:18:27.456 "read": true, 00:18:27.456 "write": true, 00:18:27.456 "unmap": true, 00:18:27.456 "write_zeroes": true, 00:18:27.456 "flush": true, 00:18:27.456 "reset": true, 00:18:27.456 "compare": true, 00:18:27.456 "compare_and_write": true, 00:18:27.456 "abort": true, 00:18:27.456 "nvme_admin": true, 00:18:27.456 "nvme_io": true 00:18:27.456 }, 00:18:27.456 "memory_domains": [ 00:18:27.456 { 00:18:27.456 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:27.456 "dma_device_type": 0 00:18:27.456 } 00:18:27.456 ], 00:18:27.456 "driver_specific": { 00:18:27.456 "nvme": [ 00:18:27.456 { 00:18:27.456 "trid": { 00:18:27.456 "trtype": "RDMA", 00:18:27.456 "adrfam": "IPv4", 00:18:27.456 "traddr": "192.168.100.8", 00:18:27.456 "trsvcid": "4420", 00:18:27.456 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:27.456 }, 00:18:27.456 "ctrlr_data": { 00:18:27.456 "cntlid": 1, 00:18:27.456 "vendor_id": "0x8086", 00:18:27.456 "model_number": "SPDK bdev Controller", 00:18:27.456 "serial_number": "SPDK0", 00:18:27.456 "firmware_revision": "24.01.1", 00:18:27.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:27.456 "oacs": { 00:18:27.456 "security": 0, 00:18:27.456 "format": 0, 00:18:27.456 "firmware": 0, 00:18:27.456 "ns_manage": 0 00:18:27.456 }, 00:18:27.456 "multi_ctrlr": true, 00:18:27.456 "ana_reporting": false 00:18:27.456 }, 00:18:27.456 "vs": { 00:18:27.456 "nvme_version": "1.3" 00:18:27.456 }, 00:18:27.456 "ns_data": { 00:18:27.456 "id": 1, 00:18:27.456 "can_share": true 00:18:27.456 } 00:18:27.456 } 00:18:27.456 ], 00:18:27.456 "mp_policy": "active_passive" 00:18:27.456 } 00:18:27.456 } 00:18:27.456 ] 00:18:27.456 06:58:48 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1353640 00:18:27.456 06:58:48 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:27.456 06:58:48 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:27.456 Running I/O for 10 seconds... 00:18:28.394 Latency(us) 00:18:28.394 [2024-12-15T05:58:50.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.394 [2024-12-15T05:58:50.035Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.394 Nvme0n1 : 1.00 36731.00 143.48 0.00 0.00 0.00 0.00 0.00 00:18:28.394 [2024-12-15T05:58:50.035Z] =================================================================================================================== 00:18:28.394 [2024-12-15T05:58:50.035Z] Total : 36731.00 143.48 0.00 0.00 0.00 0.00 0.00 00:18:28.394 00:18:29.331 06:58:50 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:29.590 [2024-12-15T05:58:51.231Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.590 Nvme0n1 : 2.00 36669.00 143.24 0.00 0.00 0.00 0.00 0.00 00:18:29.590 [2024-12-15T05:58:51.231Z] =================================================================================================================== 00:18:29.590 [2024-12-15T05:58:51.231Z] Total : 36669.00 143.24 0.00 0.00 0.00 0.00 0.00 00:18:29.590 00:18:29.590 true 00:18:29.590 06:58:51 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:29.590 06:58:51 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:29.850 06:58:51 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:29.850 06:58:51 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:29.850 06:58:51 -- target/nvmf_lvs_grow.sh@65 -- # wait 1353640 00:18:30.418 [2024-12-15T05:58:52.059Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.418 Nvme0n1 : 3.00 36898.67 144.14 0.00 0.00 0.00 0.00 0.00 00:18:30.418 [2024-12-15T05:58:52.059Z] =================================================================================================================== 00:18:30.418 [2024-12-15T05:58:52.059Z] Total : 36898.67 144.14 0.00 0.00 0.00 0.00 0.00 00:18:30.418 00:18:31.798 [2024-12-15T05:58:53.439Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.798 Nvme0n1 : 4.00 37078.50 144.84 0.00 0.00 0.00 0.00 0.00 00:18:31.798 [2024-12-15T05:58:53.439Z] =================================================================================================================== 00:18:31.798 [2024-12-15T05:58:53.439Z] Total : 37078.50 144.84 0.00 0.00 0.00 0.00 0.00 00:18:31.798 00:18:32.367 [2024-12-15T05:58:54.008Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.367 Nvme0n1 : 5.00 37208.20 145.34 0.00 0.00 0.00 0.00 0.00 00:18:32.367 [2024-12-15T05:58:54.008Z] =================================================================================================================== 00:18:32.367 [2024-12-15T05:58:54.008Z] Total : 37208.20 145.34 0.00 0.00 0.00 0.00 0.00 00:18:32.367 00:18:33.745 [2024-12-15T05:58:55.386Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.745 Nvme0n1 : 6.00 37284.00 145.64 0.00 0.00 0.00 0.00 0.00 00:18:33.745 [2024-12-15T05:58:55.386Z] 
=================================================================================================================== 00:18:33.745 [2024-12-15T05:58:55.386Z] Total : 37284.00 145.64 0.00 0.00 0.00 0.00 0.00 00:18:33.745 00:18:34.681 [2024-12-15T05:58:56.322Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.681 Nvme0n1 : 7.00 37354.14 145.91 0.00 0.00 0.00 0.00 0.00 00:18:34.681 [2024-12-15T05:58:56.322Z] =================================================================================================================== 00:18:34.681 [2024-12-15T05:58:56.322Z] Total : 37354.14 145.91 0.00 0.00 0.00 0.00 0.00 00:18:34.681 00:18:35.618 [2024-12-15T05:58:57.259Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.618 Nvme0n1 : 8.00 37404.75 146.11 0.00 0.00 0.00 0.00 0.00 00:18:35.618 [2024-12-15T05:58:57.259Z] =================================================================================================================== 00:18:35.618 [2024-12-15T05:58:57.259Z] Total : 37404.75 146.11 0.00 0.00 0.00 0.00 0.00 00:18:35.618 00:18:36.556 [2024-12-15T05:58:58.197Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.556 Nvme0n1 : 9.00 37439.00 146.25 0.00 0.00 0.00 0.00 0.00 00:18:36.556 [2024-12-15T05:58:58.197Z] =================================================================================================================== 00:18:36.556 [2024-12-15T05:58:58.197Z] Total : 37439.00 146.25 0.00 0.00 0.00 0.00 0.00 00:18:36.556 00:18:37.495 [2024-12-15T05:58:59.136Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.495 Nvme0n1 : 10.00 37466.20 146.35 0.00 0.00 0.00 0.00 0.00 00:18:37.495 [2024-12-15T05:58:59.136Z] =================================================================================================================== 00:18:37.495 [2024-12-15T05:58:59.136Z] Total : 37466.20 146.35 0.00 0.00 0.00 0.00 0.00 00:18:37.495 00:18:37.495 00:18:37.495 Latency(us) 00:18:37.495 [2024-12-15T05:58:59.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.495 [2024-12-15T05:58:59.136Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.495 Nvme0n1 : 10.00 37466.79 146.35 0.00 0.00 3413.82 2372.40 7864.32 00:18:37.495 [2024-12-15T05:58:59.136Z] =================================================================================================================== 00:18:37.495 [2024-12-15T05:58:59.136Z] Total : 37466.79 146.35 0.00 0.00 3413.82 2372.40 7864.32 00:18:37.495 0 00:18:37.495 06:58:59 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1353370 00:18:37.495 06:58:59 -- common/autotest_common.sh@936 -- # '[' -z 1353370 ']' 00:18:37.495 06:58:59 -- common/autotest_common.sh@940 -- # kill -0 1353370 00:18:37.495 06:58:59 -- common/autotest_common.sh@941 -- # uname 00:18:37.495 06:58:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:37.495 06:58:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1353370 00:18:37.495 06:58:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:37.495 06:58:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:37.495 06:58:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1353370' 00:18:37.495 killing process with pid 1353370 00:18:37.495 06:58:59 -- common/autotest_common.sh@955 -- # kill 1353370 00:18:37.495 Received shutdown signal, test time was about 10.000000 seconds 
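The grow itself happened mid-run (the bdev_lvol_grow_lvstore call at iteration 2 above), and the per-second table shows it is transparent to the bdevperf workload: throughput stays around 37k IOPS across the resize. Verification is a cluster-count check: with 4 MiB clusters, the 200 MiB file yields total_data_clusters=49 and the 400 MiB file yields 99, one short of the raw 50 and 100, which is consistent with the lvstore reserving space for its own metadata. The check reduces to:

  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49 before, 99 after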
00:18:37.495 00:18:37.495 Latency(us) 00:18:37.495 [2024-12-15T05:58:59.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.495 [2024-12-15T05:58:59.136Z] =================================================================================================================== 00:18:37.495 [2024-12-15T05:58:59.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:37.495 06:58:59 -- common/autotest_common.sh@960 -- # wait 1353370 00:18:37.754 06:58:59 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:38.013 06:58:59 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:38.013 06:58:59 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:38.013 06:58:59 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:38.013 06:58:59 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:38.013 06:58:59 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1350150 00:18:38.013 06:58:59 -- target/nvmf_lvs_grow.sh@74 -- # wait 1350150 00:18:38.273 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1350150 Killed "${NVMF_APP[@]}" "$@" 00:18:38.273 06:58:59 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:38.273 06:58:59 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:38.273 06:58:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:38.273 06:58:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.273 06:58:59 -- common/autotest_common.sh@10 -- # set +x 00:18:38.273 06:58:59 -- nvmf/common.sh@469 -- # nvmfpid=1355534 00:18:38.273 06:58:59 -- nvmf/common.sh@470 -- # waitforlisten 1355534 00:18:38.273 06:58:59 -- common/autotest_common.sh@829 -- # '[' -z 1355534 ']' 00:18:38.273 06:58:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.273 06:58:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.273 06:58:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.273 06:58:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.273 06:58:59 -- common/autotest_common.sh@10 -- # set +x 00:18:38.273 06:58:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:38.273 [2024-12-15 06:58:59.744355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:38.273 [2024-12-15 06:58:59.744409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.273 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.273 [2024-12-15 06:58:59.816185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.273 [2024-12-15 06:58:59.852051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:38.273 [2024-12-15 06:58:59.852157] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
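What follows is where the dirty variant earns its name: the nvmf target is killed with SIGKILL while the lvstore is still open, leaving it dirty on disk, and a fresh target is started in its place. Re-creating the AIO bdev on the same file then drives blobstore recovery (the "Performing recovery on blobstore" notices below). The free-cluster figure is easy to sanity-check: the lvol's 38912 blocks of 4 KiB are exactly 38 clusters of 4 MiB, and 99 - 38 = 61, matching free_clusters=61 both before the kill and after recovery. A sketch of the re-attach, reusing the illustrative names and UUIDs from earlier:

  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # same file; triggers blobstore recovery
  rpc.py bdev_wait_for_examine                         # let the lvol module claim the bdev
  rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000         # lvol reappears once recovery completes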
00:18:38.273 [2024-12-15 06:58:59.852167] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.273 [2024-12-15 06:58:59.852176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.273 [2024-12-15 06:58:59.852199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.211 06:59:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.211 06:59:00 -- common/autotest_common.sh@862 -- # return 0 00:18:39.211 06:59:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:39.211 06:59:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.211 06:59:00 -- common/autotest_common.sh@10 -- # set +x 00:18:39.211 06:59:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.211 06:59:00 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:39.211 [2024-12-15 06:59:00.768067] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:39.211 [2024-12-15 06:59:00.768148] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:39.211 [2024-12-15 06:59:00.768174] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:39.211 06:59:00 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:39.211 06:59:00 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 49f07c4f-7f13-490a-9700-327ef885d777 00:18:39.211 06:59:00 -- common/autotest_common.sh@897 -- # local bdev_name=49f07c4f-7f13-490a-9700-327ef885d777 00:18:39.211 06:59:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:39.211 06:59:00 -- common/autotest_common.sh@899 -- # local i 00:18:39.211 06:59:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:39.211 06:59:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:39.211 06:59:00 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:39.471 06:59:00 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 49f07c4f-7f13-490a-9700-327ef885d777 -t 2000 00:18:39.735 [ 00:18:39.735 { 00:18:39.735 "name": "49f07c4f-7f13-490a-9700-327ef885d777", 00:18:39.735 "aliases": [ 00:18:39.735 "lvs/lvol" 00:18:39.735 ], 00:18:39.735 "product_name": "Logical Volume", 00:18:39.735 "block_size": 4096, 00:18:39.735 "num_blocks": 38912, 00:18:39.735 "uuid": "49f07c4f-7f13-490a-9700-327ef885d777", 00:18:39.735 "assigned_rate_limits": { 00:18:39.735 "rw_ios_per_sec": 0, 00:18:39.735 "rw_mbytes_per_sec": 0, 00:18:39.735 "r_mbytes_per_sec": 0, 00:18:39.735 "w_mbytes_per_sec": 0 00:18:39.735 }, 00:18:39.735 "claimed": false, 00:18:39.735 "zoned": false, 00:18:39.735 "supported_io_types": { 00:18:39.735 "read": true, 00:18:39.735 "write": true, 00:18:39.735 "unmap": true, 00:18:39.735 "write_zeroes": true, 00:18:39.735 "flush": false, 00:18:39.735 "reset": true, 00:18:39.735 "compare": false, 00:18:39.735 "compare_and_write": false, 00:18:39.735 "abort": false, 00:18:39.735 "nvme_admin": false, 00:18:39.735 "nvme_io": false 00:18:39.735 }, 00:18:39.735 "driver_specific": { 00:18:39.735 "lvol": { 00:18:39.735 "lvol_store_uuid": "4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f", 00:18:39.735 "base_bdev": "aio_bdev", 00:18:39.735 "thin_provision": false, 
00:18:39.735 "snapshot": false, 00:18:39.735 "clone": false, 00:18:39.735 "esnap_clone": false 00:18:39.735 } 00:18:39.735 } 00:18:39.735 } 00:18:39.735 ] 00:18:39.735 06:59:01 -- common/autotest_common.sh@905 -- # return 0 00:18:39.735 06:59:01 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:39.735 06:59:01 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:39.735 06:59:01 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:39.735 06:59:01 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:39.735 06:59:01 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:40.093 06:59:01 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:40.093 06:59:01 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:40.093 [2024-12-15 06:59:01.644376] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:40.093 06:59:01 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:40.093 06:59:01 -- common/autotest_common.sh@650 -- # local es=0 00:18:40.093 06:59:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:40.093 06:59:01 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:40.093 06:59:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.093 06:59:01 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:40.093 06:59:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.093 06:59:01 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:40.093 06:59:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.093 06:59:01 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:40.093 06:59:01 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:40.093 06:59:01 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:40.352 request: 00:18:40.352 { 00:18:40.352 "uuid": "4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f", 00:18:40.352 "method": "bdev_lvol_get_lvstores", 00:18:40.352 "req_id": 1 00:18:40.352 } 00:18:40.352 Got JSON-RPC error response 00:18:40.352 response: 00:18:40.352 { 00:18:40.352 "code": -19, 00:18:40.352 "message": "No such device" 00:18:40.352 } 00:18:40.352 06:59:01 -- common/autotest_common.sh@653 -- # es=1 00:18:40.352 06:59:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.352 06:59:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.352 06:59:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.352 06:59:01 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:40.611 aio_bdev 00:18:40.611 06:59:02 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 49f07c4f-7f13-490a-9700-327ef885d777 00:18:40.611 06:59:02 -- common/autotest_common.sh@897 -- # local bdev_name=49f07c4f-7f13-490a-9700-327ef885d777 00:18:40.611 06:59:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:40.611 06:59:02 -- common/autotest_common.sh@899 -- # local i 00:18:40.611 06:59:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:40.611 06:59:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:40.611 06:59:02 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:40.611 06:59:02 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 49f07c4f-7f13-490a-9700-327ef885d777 -t 2000 00:18:40.870 [ 00:18:40.870 { 00:18:40.870 "name": "49f07c4f-7f13-490a-9700-327ef885d777", 00:18:40.870 "aliases": [ 00:18:40.870 "lvs/lvol" 00:18:40.870 ], 00:18:40.870 "product_name": "Logical Volume", 00:18:40.870 "block_size": 4096, 00:18:40.870 "num_blocks": 38912, 00:18:40.870 "uuid": "49f07c4f-7f13-490a-9700-327ef885d777", 00:18:40.870 "assigned_rate_limits": { 00:18:40.870 "rw_ios_per_sec": 0, 00:18:40.870 "rw_mbytes_per_sec": 0, 00:18:40.870 "r_mbytes_per_sec": 0, 00:18:40.870 "w_mbytes_per_sec": 0 00:18:40.870 }, 00:18:40.870 "claimed": false, 00:18:40.870 "zoned": false, 00:18:40.870 "supported_io_types": { 00:18:40.870 "read": true, 00:18:40.870 "write": true, 00:18:40.870 "unmap": true, 00:18:40.870 "write_zeroes": true, 00:18:40.870 "flush": false, 00:18:40.870 "reset": true, 00:18:40.870 "compare": false, 00:18:40.870 "compare_and_write": false, 00:18:40.870 "abort": false, 00:18:40.870 "nvme_admin": false, 00:18:40.870 "nvme_io": false 00:18:40.870 }, 00:18:40.870 "driver_specific": { 00:18:40.870 "lvol": { 00:18:40.870 "lvol_store_uuid": "4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f", 00:18:40.870 "base_bdev": "aio_bdev", 00:18:40.870 "thin_provision": false, 00:18:40.870 "snapshot": false, 00:18:40.870 "clone": false, 00:18:40.870 "esnap_clone": false 00:18:40.870 } 00:18:40.870 } 00:18:40.870 } 00:18:40.870 ] 00:18:40.870 06:59:02 -- common/autotest_common.sh@905 -- # return 0 00:18:40.870 06:59:02 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:40.870 06:59:02 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:41.129 06:59:02 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:41.129 06:59:02 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:41.129 06:59:02 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:41.129 06:59:02 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:41.129 06:59:02 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 49f07c4f-7f13-490a-9700-327ef885d777 00:18:41.389 06:59:02 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ad9b776-7cdc-45f8-b08d-7a2a3c087a7f 00:18:41.648 06:59:03 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:18:41.648 06:59:03 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.907 00:18:41.907 real 0m17.374s 00:18:41.907 user 0m44.932s 00:18:41.907 sys 0m3.242s 00:18:41.907 06:59:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:41.907 06:59:03 -- common/autotest_common.sh@10 -- # set +x 00:18:41.907 ************************************ 00:18:41.907 END TEST lvs_grow_dirty 00:18:41.907 ************************************ 00:18:41.907 06:59:03 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:41.907 06:59:03 -- common/autotest_common.sh@806 -- # type=--id 00:18:41.907 06:59:03 -- common/autotest_common.sh@807 -- # id=0 00:18:41.907 06:59:03 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:41.907 06:59:03 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:41.907 06:59:03 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:41.907 06:59:03 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:41.907 06:59:03 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:41.907 06:59:03 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:41.907 nvmf_trace.0 00:18:41.907 06:59:03 -- common/autotest_common.sh@821 -- # return 0 00:18:41.907 06:59:03 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:41.907 06:59:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:41.907 06:59:03 -- nvmf/common.sh@116 -- # sync 00:18:41.907 06:59:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:41.907 06:59:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:41.907 06:59:03 -- nvmf/common.sh@119 -- # set +e 00:18:41.907 06:59:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:41.907 06:59:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:41.907 rmmod nvme_rdma 00:18:41.907 rmmod nvme_fabrics 00:18:41.907 06:59:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:41.907 06:59:03 -- nvmf/common.sh@123 -- # set -e 00:18:41.907 06:59:03 -- nvmf/common.sh@124 -- # return 0 00:18:41.907 06:59:03 -- nvmf/common.sh@477 -- # '[' -n 1355534 ']' 00:18:41.907 06:59:03 -- nvmf/common.sh@478 -- # killprocess 1355534 00:18:41.907 06:59:03 -- common/autotest_common.sh@936 -- # '[' -z 1355534 ']' 00:18:41.907 06:59:03 -- common/autotest_common.sh@940 -- # kill -0 1355534 00:18:41.907 06:59:03 -- common/autotest_common.sh@941 -- # uname 00:18:41.907 06:59:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:41.907 06:59:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1355534 00:18:41.907 06:59:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:41.907 06:59:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:41.907 06:59:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1355534' 00:18:41.907 killing process with pid 1355534 00:18:41.907 06:59:03 -- common/autotest_common.sh@955 -- # kill 1355534 00:18:41.907 06:59:03 -- common/autotest_common.sh@960 -- # wait 1355534 00:18:42.166 06:59:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:42.166 06:59:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:42.166 00:18:42.166 real 0m41.209s 00:18:42.166 user 1m6.551s 00:18:42.166 sys 0m9.826s 00:18:42.166 06:59:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:42.166 06:59:03 -- common/autotest_common.sh@10 -- 
# set +x 00:18:42.166 ************************************ 00:18:42.166 END TEST nvmf_lvs_grow 00:18:42.166 ************************************ 00:18:42.166 06:59:03 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:42.166 06:59:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:42.166 06:59:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.166 06:59:03 -- common/autotest_common.sh@10 -- # set +x 00:18:42.166 ************************************ 00:18:42.166 START TEST nvmf_bdev_io_wait 00:18:42.166 ************************************ 00:18:42.166 06:59:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:42.166 * Looking for test storage... 00:18:42.166 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:42.424 06:59:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:42.424 06:59:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:42.424 06:59:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:42.424 06:59:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:42.424 06:59:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:42.424 06:59:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:42.424 06:59:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:42.424 06:59:03 -- scripts/common.sh@335 -- # IFS=.-: 00:18:42.424 06:59:03 -- scripts/common.sh@335 -- # read -ra ver1 00:18:42.424 06:59:03 -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.424 06:59:03 -- scripts/common.sh@336 -- # read -ra ver2 00:18:42.424 06:59:03 -- scripts/common.sh@337 -- # local 'op=<' 00:18:42.424 06:59:03 -- scripts/common.sh@339 -- # ver1_l=2 00:18:42.424 06:59:03 -- scripts/common.sh@340 -- # ver2_l=1 00:18:42.424 06:59:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:42.424 06:59:03 -- scripts/common.sh@343 -- # case "$op" in 00:18:42.424 06:59:03 -- scripts/common.sh@344 -- # : 1 00:18:42.424 06:59:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:42.424 06:59:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.424 06:59:03 -- scripts/common.sh@364 -- # decimal 1 00:18:42.425 06:59:03 -- scripts/common.sh@352 -- # local d=1 00:18:42.425 06:59:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.425 06:59:03 -- scripts/common.sh@354 -- # echo 1 00:18:42.425 06:59:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:42.425 06:59:03 -- scripts/common.sh@365 -- # decimal 2 00:18:42.425 06:59:03 -- scripts/common.sh@352 -- # local d=2 00:18:42.425 06:59:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.425 06:59:03 -- scripts/common.sh@354 -- # echo 2 00:18:42.425 06:59:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:42.425 06:59:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:42.425 06:59:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:42.425 06:59:03 -- scripts/common.sh@367 -- # return 0 00:18:42.425 06:59:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.425 06:59:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 06:59:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 06:59:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 06:59:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:42.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.425 --rc genhtml_branch_coverage=1 00:18:42.425 --rc genhtml_function_coverage=1 00:18:42.425 --rc genhtml_legend=1 00:18:42.425 --rc geninfo_all_blocks=1 00:18:42.425 --rc geninfo_unexecuted_blocks=1 00:18:42.425 00:18:42.425 ' 00:18:42.425 06:59:03 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.425 06:59:03 -- nvmf/common.sh@7 -- # uname -s 00:18:42.425 06:59:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.425 06:59:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.425 06:59:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.425 06:59:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.425 06:59:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.425 06:59:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.425 06:59:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.425 06:59:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.425 06:59:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.425 06:59:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.425 06:59:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:42.425 06:59:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:42.425 06:59:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.425 06:59:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.425 06:59:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.425 06:59:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:42.425 06:59:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.425 06:59:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.425 06:59:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.425 06:59:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 06:59:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 06:59:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 06:59:03 -- paths/export.sh@5 -- # export PATH 00:18:42.425 06:59:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.425 06:59:03 -- nvmf/common.sh@46 -- # : 0 00:18:42.425 06:59:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:42.425 06:59:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:42.425 06:59:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:42.425 06:59:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.425 06:59:03 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.425 06:59:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:42.425 06:59:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:42.425 06:59:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:42.425 06:59:03 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.425 06:59:03 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.425 06:59:03 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:42.425 06:59:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:42.425 06:59:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.425 06:59:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:42.425 06:59:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:42.425 06:59:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:42.425 06:59:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.425 06:59:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.425 06:59:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.425 06:59:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:42.425 06:59:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:42.425 06:59:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:42.425 06:59:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.995 06:59:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:48.995 06:59:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:48.995 06:59:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:48.995 06:59:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:48.995 06:59:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:48.995 06:59:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:48.995 06:59:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:48.995 06:59:10 -- nvmf/common.sh@294 -- # net_devs=() 00:18:48.995 06:59:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:48.995 06:59:10 -- nvmf/common.sh@295 -- # e810=() 00:18:48.995 06:59:10 -- nvmf/common.sh@295 -- # local -ga e810 00:18:48.995 06:59:10 -- nvmf/common.sh@296 -- # x722=() 00:18:48.995 06:59:10 -- nvmf/common.sh@296 -- # local -ga x722 00:18:48.995 06:59:10 -- nvmf/common.sh@297 -- # mlx=() 00:18:48.995 06:59:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:48.995 06:59:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.995 06:59:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:48.995 06:59:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:18:48.995 06:59:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:48.995 06:59:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:48.995 06:59:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:48.995 06:59:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:48.995 06:59:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:48.995 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:48.995 06:59:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.995 06:59:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:48.995 06:59:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:48.995 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:48.995 06:59:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.995 06:59:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:48.995 06:59:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:48.995 06:59:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:48.995 06:59:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.995 06:59:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:48.995 06:59:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.995 06:59:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:48.995 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:48.995 06:59:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.995 06:59:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.996 06:59:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:48.996 06:59:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.996 06:59:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:48.996 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.996 06:59:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:48.996 06:59:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:48.996 06:59:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:48.996 06:59:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:48.996 06:59:10 -- nvmf/common.sh@57 -- # uname 00:18:48.996 06:59:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:48.996 06:59:10 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:18:48.996 06:59:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:48.996 06:59:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:48.996 06:59:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:48.996 06:59:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:48.996 06:59:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:48.996 06:59:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:48.996 06:59:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:48.996 06:59:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:48.996 06:59:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:48.996 06:59:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.996 06:59:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:48.996 06:59:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:48.996 06:59:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.996 06:59:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:48.996 06:59:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@104 -- # continue 2 00:18:48.996 06:59:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@104 -- # continue 2 00:18:48.996 06:59:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:48.996 06:59:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.996 06:59:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:48.996 06:59:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:48.996 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:48.996 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:48.996 altname enp217s0f0np0 00:18:48.996 altname ens818f0np0 00:18:48.996 inet 192.168.100.8/24 scope global mlx_0_0 00:18:48.996 valid_lft forever preferred_lft forever 00:18:48.996 06:59:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:48.996 06:59:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.996 06:59:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:48.996 06:59:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:48.996 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:48.996 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:48.996 altname enp217s0f1np1 00:18:48.996 altname ens818f1np1 00:18:48.996 inet 192.168.100.9/24 scope global mlx_0_1 00:18:48.996 valid_lft forever preferred_lft forever 00:18:48.996 06:59:10 -- nvmf/common.sh@410 -- # return 0 00:18:48.996 06:59:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:48.996 06:59:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:48.996 06:59:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:48.996 06:59:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:48.996 06:59:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.996 06:59:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:48.996 06:59:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:48.996 06:59:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.996 06:59:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:48.996 06:59:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@104 -- # continue 2 00:18:48.996 06:59:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.996 06:59:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:48.996 06:59:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@104 -- # continue 2 00:18:48.996 06:59:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:48.996 06:59:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.996 06:59:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:48.996 06:59:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:48.996 06:59:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:48.996 06:59:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:48.996 192.168.100.9' 00:18:48.996 06:59:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:48.996 192.168.100.9' 00:18:48.996 06:59:10 -- nvmf/common.sh@445 -- # head -n 1 00:18:48.996 06:59:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:48.996 06:59:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:48.996 192.168.100.9' 00:18:48.996 06:59:10 -- nvmf/common.sh@446 -- # head -n 1 00:18:48.996 06:59:10 -- nvmf/common.sh@446 -- # tail -n +2 00:18:48.996 06:59:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:48.996 06:59:10 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:48.996 06:59:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:48.996 06:59:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:48.996 06:59:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:48.996 06:59:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:48.996 06:59:10 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:48.996 06:59:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:48.996 06:59:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.996 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:48.996 06:59:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:48.996 06:59:10 -- nvmf/common.sh@469 -- # nvmfpid=1359579 00:18:48.996 06:59:10 -- nvmf/common.sh@470 -- # waitforlisten 1359579 00:18:48.996 06:59:10 -- common/autotest_common.sh@829 -- # '[' -z 1359579 ']' 00:18:48.996 06:59:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.996 06:59:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.996 06:59:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.996 06:59:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.996 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:48.996 [2024-12-15 06:59:10.524451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:48.996 [2024-12-15 06:59:10.524505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.996 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.996 [2024-12-15 06:59:10.597290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.256 [2024-12-15 06:59:10.637320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:49.256 [2024-12-15 06:59:10.637433] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.256 [2024-12-15 06:59:10.637445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.256 [2024-12-15 06:59:10.637454] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
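The 192.168.100.8 handed to the target here, and the .9 on the second port, come from the interface scan above: nvmftestinit walks the detected mlx5 netdevs and takes the first IPv4 address of each. Per interface the extraction is exactly the pipeline shown in the trace:

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9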
00:18:49.256 [2024-12-15 06:59:10.637502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.256 [2024-12-15 06:59:10.637603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.256 [2024-12-15 06:59:10.637663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.256 [2024-12-15 06:59:10.637665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.256 06:59:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.256 06:59:10 -- common/autotest_common.sh@862 -- # return 0 00:18:49.256 06:59:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:49.256 06:59:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.256 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:49.256 06:59:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.256 06:59:10 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:49.256 06:59:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.256 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:49.256 06:59:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.256 06:59:10 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:49.256 06:59:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.256 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:49.256 06:59:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.256 06:59:10 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:49.256 06:59:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.256 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:49.256 [2024-12-15 06:59:10.826024] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x167d070/0x1681540) succeed. 00:18:49.256 [2024-12-15 06:59:10.834919] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x167e610/0x16c2be0) succeed. 
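With both IB devices created, the target side of bdev_io_wait is assembled over RPC. The deliberately tiny bdev_io pool (-p 5 -c 1 to bdev_set_options) is the interesting knob: starving the pool is presumably what pushes submissions onto the bdev layer's io_wait retry path, which is the behavior this test exists to cover (that reading is inferred from the test name; the log itself does not say so). Condensed, the sequence is:

  rpc.py bdev_set_options -p 5 -c 1        # shrink the bdev_io pool/cache before init
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420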
00:18:49.516 06:59:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.516 06:59:10 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:49.516 06:59:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.516 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:49.516 Malloc0 00:18:49.516 06:59:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.516 06:59:10 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:49.516 06:59:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.516 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:49.516 06:59:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.516 06:59:10 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.516 06:59:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.516 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:18:49.516 06:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:49.516 06:59:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.516 06:59:11 -- common/autotest_common.sh@10 -- # set +x 00:18:49.516 [2024-12-15 06:59:11.007582] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:49.516 06:59:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1359610 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:49.516 06:59:11 -- nvmf/common.sh@520 -- # config=() 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:49.516 06:59:11 -- nvmf/common.sh@520 -- # local subsystem config 00:18:49.516 06:59:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:49.516 06:59:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:49.516 { 00:18:49.516 "params": { 00:18:49.516 "name": "Nvme$subsystem", 00:18:49.516 "trtype": "$TEST_TRANSPORT", 00:18:49.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.516 "adrfam": "ipv4", 00:18:49.516 "trsvcid": "$NVMF_PORT", 00:18:49.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.516 "hdgst": ${hdgst:-false}, 00:18:49.516 "ddgst": ${ddgst:-false} 00:18:49.516 }, 00:18:49.516 "method": "bdev_nvme_attach_controller" 00:18:49.516 } 00:18:49.516 EOF 00:18:49.516 )") 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@30 -- # READ_PID=1359612 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1359615 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:49.516 06:59:11 -- nvmf/common.sh@542 -- # cat 00:18:49.516 06:59:11 -- nvmf/common.sh@520 -- # config=() 00:18:49.516 06:59:11 -- nvmf/common.sh@520 -- # local subsystem config 00:18:49.516 06:59:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:49.516 06:59:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:49.516 { 
00:18:49.516 "params": { 00:18:49.516 "name": "Nvme$subsystem", 00:18:49.516 "trtype": "$TEST_TRANSPORT", 00:18:49.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.516 "adrfam": "ipv4", 00:18:49.516 "trsvcid": "$NVMF_PORT", 00:18:49.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.516 "hdgst": ${hdgst:-false}, 00:18:49.516 "ddgst": ${ddgst:-false} 00:18:49.516 }, 00:18:49.516 "method": "bdev_nvme_attach_controller" 00:18:49.516 } 00:18:49.516 EOF 00:18:49.516 )") 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:49.516 06:59:11 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1359617 00:18:49.517 06:59:11 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:49.517 06:59:11 -- target/bdev_io_wait.sh@35 -- # sync 00:18:49.517 06:59:11 -- nvmf/common.sh@520 -- # config=() 00:18:49.517 06:59:11 -- nvmf/common.sh@520 -- # local subsystem config 00:18:49.517 06:59:11 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:49.517 06:59:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:49.517 06:59:11 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:49.517 06:59:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:49.517 { 00:18:49.517 "params": { 00:18:49.517 "name": "Nvme$subsystem", 00:18:49.517 "trtype": "$TEST_TRANSPORT", 00:18:49.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.517 "adrfam": "ipv4", 00:18:49.517 "trsvcid": "$NVMF_PORT", 00:18:49.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.517 "hdgst": ${hdgst:-false}, 00:18:49.517 "ddgst": ${ddgst:-false} 00:18:49.517 }, 00:18:49.517 "method": "bdev_nvme_attach_controller" 00:18:49.517 } 00:18:49.517 EOF 00:18:49.517 )") 00:18:49.517 06:59:11 -- nvmf/common.sh@520 -- # config=() 00:18:49.517 06:59:11 -- nvmf/common.sh@520 -- # local subsystem config 00:18:49.517 06:59:11 -- nvmf/common.sh@542 -- # cat 00:18:49.517 06:59:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:49.517 06:59:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:49.517 { 00:18:49.517 "params": { 00:18:49.517 "name": "Nvme$subsystem", 00:18:49.517 "trtype": "$TEST_TRANSPORT", 00:18:49.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.517 "adrfam": "ipv4", 00:18:49.517 "trsvcid": "$NVMF_PORT", 00:18:49.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.517 "hdgst": ${hdgst:-false}, 00:18:49.517 "ddgst": ${ddgst:-false} 00:18:49.517 }, 00:18:49.517 "method": "bdev_nvme_attach_controller" 00:18:49.517 } 00:18:49.517 EOF 00:18:49.517 )") 00:18:49.517 06:59:11 -- nvmf/common.sh@542 -- # cat 00:18:49.517 06:59:11 -- nvmf/common.sh@544 -- # jq . 00:18:49.517 06:59:11 -- target/bdev_io_wait.sh@37 -- # wait 1359610 00:18:49.517 06:59:11 -- nvmf/common.sh@542 -- # cat 00:18:49.517 06:59:11 -- nvmf/common.sh@544 -- # jq . 
00:18:49.517 06:59:11 -- nvmf/common.sh@545 -- # IFS=, 00:18:49.517 06:59:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:49.517 "params": { 00:18:49.517 "name": "Nvme1", 00:18:49.517 "trtype": "rdma", 00:18:49.517 "traddr": "192.168.100.8", 00:18:49.517 "adrfam": "ipv4", 00:18:49.517 "trsvcid": "4420", 00:18:49.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.517 "hdgst": false, 00:18:49.517 "ddgst": false 00:18:49.517 }, 00:18:49.517 "method": "bdev_nvme_attach_controller" 00:18:49.517 }' 00:18:49.517 06:59:11 -- nvmf/common.sh@544 -- # jq . 00:18:49.517 06:59:11 -- nvmf/common.sh@544 -- # jq . 00:18:49.517 06:59:11 -- nvmf/common.sh@545 -- # IFS=, 00:18:49.517 06:59:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:49.517 "params": { 00:18:49.517 "name": "Nvme1", 00:18:49.517 "trtype": "rdma", 00:18:49.517 "traddr": "192.168.100.8", 00:18:49.517 "adrfam": "ipv4", 00:18:49.517 "trsvcid": "4420", 00:18:49.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.517 "hdgst": false, 00:18:49.517 "ddgst": false 00:18:49.517 }, 00:18:49.517 "method": "bdev_nvme_attach_controller" 00:18:49.517 }' 00:18:49.517 06:59:11 -- nvmf/common.sh@545 -- # IFS=, 00:18:49.517 06:59:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:49.517 "params": { 00:18:49.517 "name": "Nvme1", 00:18:49.517 "trtype": "rdma", 00:18:49.517 "traddr": "192.168.100.8", 00:18:49.517 "adrfam": "ipv4", 00:18:49.517 "trsvcid": "4420", 00:18:49.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.517 "hdgst": false, 00:18:49.517 "ddgst": false 00:18:49.517 }, 00:18:49.517 "method": "bdev_nvme_attach_controller" 00:18:49.517 }' 00:18:49.517 06:59:11 -- nvmf/common.sh@545 -- # IFS=, 00:18:49.517 06:59:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:49.517 "params": { 00:18:49.517 "name": "Nvme1", 00:18:49.517 "trtype": "rdma", 00:18:49.517 "traddr": "192.168.100.8", 00:18:49.517 "adrfam": "ipv4", 00:18:49.517 "trsvcid": "4420", 00:18:49.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.517 "hdgst": false, 00:18:49.517 "ddgst": false 00:18:49.517 }, 00:18:49.517 "method": "bdev_nvme_attach_controller" 00:18:49.517 }' 00:18:49.517 [2024-12-15 06:59:11.054501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:49.517 [2024-12-15 06:59:11.054556] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:49.517 [2024-12-15 06:59:11.058561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:49.517 [2024-12-15 06:59:11.058563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:49.517 [2024-12-15 06:59:11.058611] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:49.517 [2024-12-15 06:59:11.058611] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:49.517 [2024-12-15 06:59:11.067323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:49.517 [2024-12-15 06:59:11.067382] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:49.517 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.776 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.776 [2024-12-15 06:59:11.246913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.776 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.776 [2024-12-15 06:59:11.275384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:49.776 [2024-12-15 06:59:11.298610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.776 [2024-12-15 06:59:11.321466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:49.776 [2024-12-15 06:59:11.349382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.776 [2024-12-15 06:59:11.371054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:50.036 [2024-12-15 06:59:11.441072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.036 [2024-12-15 06:59:11.469531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:50.036 Running I/O for 1 seconds... 00:18:50.036 Running I/O for 1 seconds... 00:18:50.036 Running I/O for 1 seconds... 00:18:50.036 Running I/O for 1 seconds...
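The four instances above run concurrently, one workload each; the script stores their PIDs (WRITE_PID and friends) and joins them with wait. A minimal sketch of that fan-out/join pattern under the same flags, reusing the hypothetical gen_json from the sketch above; the distinct -i shm ids are what make EAL derive the separate spdk1..spdk4 file prefixes seen in the startup lines:

masks=(0x10 0x20 0x40 0x80)
workloads=(write read flush unmap)
pids=()
for i in 0 1 2 3; do
  # -s 256 caps each instance's hugepage memory at 256 MB (the -m 256 in the EAL lines)
  ./build/examples/bdevperf -m "${masks[i]}" -i $((i + 1)) \
    --json <(gen_json) -q 128 -o 4096 -w "${workloads[i]}" -t 1 -s 256 &
  pids+=($!)
done
wait "${pids[@]}"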
00:18:50.973 00:18:50.973 Latency(us) 00:18:50.973 [2024-12-15T05:59:12.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.973 [2024-12-15T05:59:12.614Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:50.973 Nvme1n1 : 1.00 264467.97 1033.08 0.00 0.00 482.85 195.79 1900.54 00:18:50.973 [2024-12-15T05:59:12.614Z] =================================================================================================================== 00:18:50.973 [2024-12-15T05:59:12.614Z] Total : 264467.97 1033.08 0.00 0.00 482.85 195.79 1900.54 00:18:50.973 00:18:50.973 Latency(us) 00:18:50.973 [2024-12-15T05:59:12.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.973 [2024-12-15T05:59:12.614Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:50.973 Nvme1n1 : 1.01 18345.59 71.66 0.00 0.00 6955.93 3827.30 14470.35 00:18:50.973 [2024-12-15T05:59:12.614Z] =================================================================================================================== 00:18:50.973 [2024-12-15T05:59:12.614Z] Total : 18345.59 71.66 0.00 0.00 6955.93 3827.30 14470.35 00:18:51.233 00:18:51.233 Latency(us) 00:18:51.233 [2024-12-15T05:59:12.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.233 [2024-12-15T05:59:12.874Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:51.233 Nvme1n1 : 1.00 18290.58 71.45 0.00 0.00 6979.64 4351.59 18140.36 00:18:51.233 [2024-12-15T05:59:12.874Z] =================================================================================================================== 00:18:51.233 [2024-12-15T05:59:12.874Z] Total : 18290.58 71.45 0.00 0.00 6979.64 4351.59 18140.36 00:18:51.233 00:18:51.233 Latency(us) 00:18:51.233 [2024-12-15T05:59:12.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.233 [2024-12-15T05:59:12.874Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:51.233 Nvme1n1 : 1.00 14675.39 57.33 0.00 0.00 8700.01 4639.95 18769.51 00:18:51.233 [2024-12-15T05:59:12.874Z] =================================================================================================================== 00:18:51.233 [2024-12-15T05:59:12.874Z] Total : 14675.39 57.33 0.00 0.00 8700.01 4639.95 18769.51 00:18:51.492 06:59:12 -- target/bdev_io_wait.sh@38 -- # wait 1359612 00:18:51.492 06:59:12 -- target/bdev_io_wait.sh@39 -- # wait 1359615 00:18:51.492 06:59:12 -- target/bdev_io_wait.sh@40 -- # wait 1359617 00:18:51.492 06:59:12 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.492 06:59:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.492 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:18:51.492 06:59:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.492 06:59:12 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:51.492 06:59:12 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:51.492 06:59:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:51.492 06:59:12 -- nvmf/common.sh@116 -- # sync 00:18:51.492 06:59:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:51.492 06:59:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:51.493 06:59:12 -- nvmf/common.sh@119 -- # set +e 00:18:51.493 06:59:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:51.493 06:59:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:51.493 rmmod nvme_rdma 
00:18:51.493 rmmod nvme_fabrics 00:18:51.493 06:59:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:51.493 06:59:12 -- nvmf/common.sh@123 -- # set -e 00:18:51.493 06:59:13 -- nvmf/common.sh@124 -- # return 0 00:18:51.493 06:59:13 -- nvmf/common.sh@477 -- # '[' -n 1359579 ']' 00:18:51.493 06:59:13 -- nvmf/common.sh@478 -- # killprocess 1359579 00:18:51.493 06:59:13 -- common/autotest_common.sh@936 -- # '[' -z 1359579 ']' 00:18:51.493 06:59:13 -- common/autotest_common.sh@940 -- # kill -0 1359579 00:18:51.493 06:59:13 -- common/autotest_common.sh@941 -- # uname 00:18:51.493 06:59:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.493 06:59:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1359579 00:18:51.493 06:59:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.493 06:59:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.493 06:59:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1359579' 00:18:51.493 killing process with pid 1359579 00:18:51.493 06:59:13 -- common/autotest_common.sh@955 -- # kill 1359579 00:18:51.493 06:59:13 -- common/autotest_common.sh@960 -- # wait 1359579 00:18:51.752 06:59:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:51.752 06:59:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:51.752 00:18:51.752 real 0m9.601s 00:18:51.752 user 0m17.772s 00:18:51.752 sys 0m6.333s 00:18:51.752 06:59:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:51.752 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 ************************************ 00:18:51.752 END TEST nvmf_bdev_io_wait 00:18:51.752 ************************************ 00:18:51.752 06:59:13 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:51.752 06:59:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:51.752 06:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.752 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 ************************************ 00:18:51.752 START TEST nvmf_queue_depth 00:18:51.752 ************************************ 00:18:51.752 06:59:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:52.012 * Looking for test storage... 
00:18:52.012 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:52.012 06:59:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:52.012 06:59:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:52.012 06:59:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:52.012 06:59:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:52.012 06:59:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:52.012 06:59:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:52.012 06:59:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:52.012 06:59:13 -- scripts/common.sh@335 -- # IFS=.-: 00:18:52.012 06:59:13 -- scripts/common.sh@335 -- # read -ra ver1 00:18:52.012 06:59:13 -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.012 06:59:13 -- scripts/common.sh@336 -- # read -ra ver2 00:18:52.012 06:59:13 -- scripts/common.sh@337 -- # local 'op=<' 00:18:52.012 06:59:13 -- scripts/common.sh@339 -- # ver1_l=2 00:18:52.012 06:59:13 -- scripts/common.sh@340 -- # ver2_l=1 00:18:52.012 06:59:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:52.012 06:59:13 -- scripts/common.sh@343 -- # case "$op" in 00:18:52.012 06:59:13 -- scripts/common.sh@344 -- # : 1 00:18:52.012 06:59:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:52.012 06:59:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:52.012 06:59:13 -- scripts/common.sh@364 -- # decimal 1 00:18:52.012 06:59:13 -- scripts/common.sh@352 -- # local d=1 00:18:52.012 06:59:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.012 06:59:13 -- scripts/common.sh@354 -- # echo 1 00:18:52.012 06:59:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:52.012 06:59:13 -- scripts/common.sh@365 -- # decimal 2 00:18:52.012 06:59:13 -- scripts/common.sh@352 -- # local d=2 00:18:52.012 06:59:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.012 06:59:13 -- scripts/common.sh@354 -- # echo 2 00:18:52.012 06:59:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:52.012 06:59:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:52.012 06:59:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:52.012 06:59:13 -- scripts/common.sh@367 -- # return 0 00:18:52.013 06:59:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.013 06:59:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.013 --rc genhtml_branch_coverage=1 00:18:52.013 --rc genhtml_function_coverage=1 00:18:52.013 --rc genhtml_legend=1 00:18:52.013 --rc geninfo_all_blocks=1 00:18:52.013 --rc geninfo_unexecuted_blocks=1 00:18:52.013 00:18:52.013 ' 00:18:52.013 06:59:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.013 --rc genhtml_branch_coverage=1 00:18:52.013 --rc genhtml_function_coverage=1 00:18:52.013 --rc genhtml_legend=1 00:18:52.013 --rc geninfo_all_blocks=1 00:18:52.013 --rc geninfo_unexecuted_blocks=1 00:18:52.013 00:18:52.013 ' 00:18:52.013 06:59:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.013 --rc genhtml_branch_coverage=1 00:18:52.013 --rc genhtml_function_coverage=1 00:18:52.013 --rc genhtml_legend=1 00:18:52.013 --rc geninfo_all_blocks=1 00:18:52.013 --rc geninfo_unexecuted_blocks=1 00:18:52.013 00:18:52.013 ' 
00:18:52.013 06:59:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.013 --rc genhtml_branch_coverage=1 00:18:52.013 --rc genhtml_function_coverage=1 00:18:52.013 --rc genhtml_legend=1 00:18:52.013 --rc geninfo_all_blocks=1 00:18:52.013 --rc geninfo_unexecuted_blocks=1 00:18:52.013 00:18:52.013 ' 00:18:52.013 06:59:13 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.013 06:59:13 -- nvmf/common.sh@7 -- # uname -s 00:18:52.013 06:59:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.013 06:59:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.013 06:59:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.013 06:59:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.013 06:59:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.013 06:59:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.013 06:59:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.013 06:59:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.013 06:59:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.013 06:59:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.013 06:59:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:52.013 06:59:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:52.013 06:59:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.013 06:59:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.013 06:59:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.013 06:59:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:52.013 06:59:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.013 06:59:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.013 06:59:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.013 06:59:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.013 06:59:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.013 06:59:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.013 06:59:13 -- paths/export.sh@5 -- # export PATH 00:18:52.013 06:59:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.013 06:59:13 -- nvmf/common.sh@46 -- # : 0 00:18:52.013 06:59:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:52.013 06:59:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:52.013 06:59:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:52.013 06:59:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.013 06:59:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.013 06:59:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:52.013 06:59:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:52.013 06:59:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:52.013 06:59:13 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:52.013 06:59:13 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:52.013 06:59:13 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:52.013 06:59:13 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:52.013 06:59:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:52.013 06:59:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.013 06:59:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:52.013 06:59:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:52.013 06:59:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:52.013 06:59:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.013 06:59:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.013 06:59:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.013 06:59:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:52.013 06:59:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:52.013 06:59:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:52.013 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.660 06:59:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:58.660 06:59:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:58.660 06:59:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:58.660 06:59:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:58.660 06:59:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:58.660 06:59:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:58.660 06:59:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:58.660 06:59:19 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:58.660 06:59:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:58.660 06:59:19 -- nvmf/common.sh@295 -- # e810=() 00:18:58.660 06:59:19 -- nvmf/common.sh@295 -- # local -ga e810 00:18:58.660 06:59:19 -- nvmf/common.sh@296 -- # x722=() 00:18:58.660 06:59:19 -- nvmf/common.sh@296 -- # local -ga x722 00:18:58.660 06:59:19 -- nvmf/common.sh@297 -- # mlx=() 00:18:58.660 06:59:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:58.660 06:59:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.660 06:59:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:58.660 06:59:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:58.660 06:59:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:58.660 06:59:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:58.660 06:59:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:58.660 06:59:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:58.660 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:58.660 06:59:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:58.660 06:59:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:58.660 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:58.660 06:59:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:58.660 06:59:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:58.660 06:59:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.660 06:59:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:58.660 06:59:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.660 06:59:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:58.660 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:58.660 06:59:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.660 06:59:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.660 06:59:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:58.660 06:59:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.660 06:59:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:58.660 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:58.660 06:59:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.660 06:59:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:58.660 06:59:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:58.660 06:59:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:58.660 06:59:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:58.660 06:59:19 -- nvmf/common.sh@57 -- # uname 00:18:58.660 06:59:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:58.660 06:59:19 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:58.660 06:59:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:58.660 06:59:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:58.660 06:59:19 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:58.660 06:59:19 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:58.660 06:59:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:58.660 06:59:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:58.660 06:59:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:58.660 06:59:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:58.660 06:59:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:58.660 06:59:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:58.660 06:59:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:58.660 06:59:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:58.660 06:59:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:58.660 06:59:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:58.660 06:59:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:58.660 06:59:19 -- nvmf/common.sh@104 -- # continue 2 00:18:58.660 06:59:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:58.660 06:59:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:58.660 06:59:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:58.661 06:59:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:58.661 06:59:19 -- 
nvmf/common.sh@104 -- # continue 2 00:18:58.661 06:59:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:58.661 06:59:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:58.661 06:59:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:58.661 06:59:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:58.661 06:59:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:58.661 06:59:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:58.661 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:58.661 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:58.661 altname enp217s0f0np0 00:18:58.661 altname ens818f0np0 00:18:58.661 inet 192.168.100.8/24 scope global mlx_0_0 00:18:58.661 valid_lft forever preferred_lft forever 00:18:58.661 06:59:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:58.661 06:59:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:58.661 06:59:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:58.661 06:59:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:58.661 06:59:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:58.661 06:59:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:58.661 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:58.661 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:58.661 altname enp217s0f1np1 00:18:58.661 altname ens818f1np1 00:18:58.661 inet 192.168.100.9/24 scope global mlx_0_1 00:18:58.661 valid_lft forever preferred_lft forever 00:18:58.661 06:59:19 -- nvmf/common.sh@410 -- # return 0 00:18:58.661 06:59:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:58.661 06:59:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:58.661 06:59:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:58.661 06:59:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:58.661 06:59:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:58.661 06:59:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:58.661 06:59:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:58.661 06:59:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:58.661 06:59:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:58.661 06:59:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:58.661 06:59:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:58.661 06:59:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:58.661 06:59:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:58.661 06:59:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:58.661 06:59:19 -- nvmf/common.sh@104 -- # continue 2 00:18:58.661 06:59:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:58.661 06:59:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:58.661 06:59:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:58.661 06:59:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:58.661 06:59:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:18:58.661 06:59:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:58.661 06:59:19 -- nvmf/common.sh@104 -- # continue 2 00:18:58.661 06:59:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:58.661 06:59:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:58.661 06:59:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:58.661 06:59:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:58.661 06:59:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:58.661 06:59:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:58.661 06:59:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:58.661 06:59:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:58.661 192.168.100.9' 00:18:58.661 06:59:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:58.661 192.168.100.9' 00:18:58.661 06:59:19 -- nvmf/common.sh@445 -- # head -n 1 00:18:58.661 06:59:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:58.661 06:59:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:58.661 192.168.100.9' 00:18:58.661 06:59:19 -- nvmf/common.sh@446 -- # tail -n +2 00:18:58.661 06:59:19 -- nvmf/common.sh@446 -- # head -n 1 00:18:58.661 06:59:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:58.661 06:59:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:58.661 06:59:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:58.661 06:59:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:58.661 06:59:19 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:58.661 06:59:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:58.661 06:59:20 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:58.661 06:59:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:58.661 06:59:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.661 06:59:20 -- common/autotest_common.sh@10 -- # set +x 00:18:58.661 06:59:20 -- nvmf/common.sh@469 -- # nvmfpid=1363346 00:18:58.661 06:59:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:58.661 06:59:20 -- nvmf/common.sh@470 -- # waitforlisten 1363346 00:18:58.661 06:59:20 -- common/autotest_common.sh@829 -- # '[' -z 1363346 ']' 00:18:58.661 06:59:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.661 06:59:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.661 06:59:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.661 06:59:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.661 06:59:20 -- common/autotest_common.sh@10 -- # set +x 00:18:58.661 [2024-12-15 06:59:20.056366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:58.661 [2024-12-15 06:59:20.056418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.661 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.661 [2024-12-15 06:59:20.128315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.661 [2024-12-15 06:59:20.165080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:58.661 [2024-12-15 06:59:20.165206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.661 [2024-12-15 06:59:20.165216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.661 [2024-12-15 06:59:20.165225] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.661 [2024-12-15 06:59:20.165245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.229 06:59:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.229 06:59:20 -- common/autotest_common.sh@862 -- # return 0 00:18:59.229 06:59:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:59.229 06:59:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.229 06:59:20 -- common/autotest_common.sh@10 -- # set +x 00:18:59.489 06:59:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.489 06:59:20 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:59.489 06:59:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.489 06:59:20 -- common/autotest_common.sh@10 -- # set +x 00:18:59.489 [2024-12-15 06:59:20.934823] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf38550/0xf3ca00) succeed. 00:18:59.489 [2024-12-15 06:59:20.944045] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf39a00/0xf7e0a0) succeed. 
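The queue_depth test below drives a single bdevperf in RPC-server mode instead: -z starts it idle on its own socket, the target's namespace is attached over that socket as bdev NVMe0n1, and bdevperf.py perform_tests launches the 10-second verify run at queue depth 1024. A minimal sketch of the sequence, with every flag taken from the traced commands (run from the SPDK repository root; the real script's cleanup traps are omitted here):

# start bdevperf idle (-z) on a private RPC socket; qd=1024 verify workload
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# attach the RDMA-exported namespace as bdev NVMe0n1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# kick off the configured run, then stop the server (the script's killprocess step)
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"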
00:18:59.489 06:59:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.489 06:59:20 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:59.489 06:59:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.489 06:59:20 -- common/autotest_common.sh@10 -- # set +x 00:18:59.489 Malloc0 00:18:59.489 06:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.489 06:59:21 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.489 06:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.489 06:59:21 -- common/autotest_common.sh@10 -- # set +x 00:18:59.489 06:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.489 06:59:21 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:59.489 06:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.489 06:59:21 -- common/autotest_common.sh@10 -- # set +x 00:18:59.489 06:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.489 06:59:21 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:59.489 06:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.489 06:59:21 -- common/autotest_common.sh@10 -- # set +x 00:18:59.489 [2024-12-15 06:59:21.027225] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:59.489 06:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.489 06:59:21 -- target/queue_depth.sh@30 -- # bdevperf_pid=1363473 00:18:59.489 06:59:21 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:59.489 06:59:21 -- target/queue_depth.sh@33 -- # waitforlisten 1363473 /var/tmp/bdevperf.sock 00:18:59.489 06:59:21 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:59.489 06:59:21 -- common/autotest_common.sh@829 -- # '[' -z 1363473 ']' 00:18:59.489 06:59:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.489 06:59:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.489 06:59:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.489 06:59:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.489 06:59:21 -- common/autotest_common.sh@10 -- # set +x 00:18:59.489 [2024-12-15 06:59:21.059613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:59.489 [2024-12-15 06:59:21.059658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363473 ] 00:18:59.489 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.748 [2024-12-15 06:59:21.130321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.748 [2024-12-15 06:59:21.168053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.317 06:59:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.317 06:59:21 -- common/autotest_common.sh@862 -- # return 0 00:19:00.317 06:59:21 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:00.317 06:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.317 06:59:21 -- common/autotest_common.sh@10 -- # set +x 00:19:00.576 NVMe0n1 00:19:00.576 06:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.576 06:59:21 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.576 Running I/O for 10 seconds... 00:19:10.558 00:19:10.558 Latency(us) 00:19:10.558 [2024-12-15T05:59:32.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.558 [2024-12-15T05:59:32.199Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:10.558 Verification LBA range: start 0x0 length 0x4000 00:19:10.558 NVMe0n1 : 10.03 29492.43 115.20 0.00 0.00 34642.55 7864.32 31037.85 00:19:10.558 [2024-12-15T05:59:32.199Z] =================================================================================================================== 00:19:10.558 [2024-12-15T05:59:32.199Z] Total : 29492.43 115.20 0.00 0.00 34642.55 7864.32 31037.85 00:19:10.558 0 00:19:10.558 06:59:32 -- target/queue_depth.sh@39 -- # killprocess 1363473 00:19:10.558 06:59:32 -- common/autotest_common.sh@936 -- # '[' -z 1363473 ']' 00:19:10.558 06:59:32 -- common/autotest_common.sh@940 -- # kill -0 1363473 00:19:10.558 06:59:32 -- common/autotest_common.sh@941 -- # uname 00:19:10.558 06:59:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:10.558 06:59:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1363473 00:19:10.558 06:59:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:10.558 06:59:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:10.558 06:59:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1363473' 00:19:10.558 killing process with pid 1363473 00:19:10.558 06:59:32 -- common/autotest_common.sh@955 -- # kill 1363473 00:19:10.558 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.558 00:19:10.558 Latency(us) 00:19:10.558 [2024-12-15T05:59:32.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.558 [2024-12-15T05:59:32.199Z] =================================================================================================================== 00:19:10.558 [2024-12-15T05:59:32.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.558 06:59:32 -- common/autotest_common.sh@960 -- # wait 1363473 00:19:10.817 06:59:32 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:10.817 06:59:32 -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:19:10.817 06:59:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:10.817 06:59:32 -- nvmf/common.sh@116 -- # sync 00:19:10.817 06:59:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:10.817 06:59:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:10.817 06:59:32 -- nvmf/common.sh@119 -- # set +e 00:19:10.817 06:59:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:10.817 06:59:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:10.817 rmmod nvme_rdma 00:19:10.817 rmmod nvme_fabrics 00:19:10.817 06:59:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:10.817 06:59:32 -- nvmf/common.sh@123 -- # set -e 00:19:10.817 06:59:32 -- nvmf/common.sh@124 -- # return 0 00:19:10.817 06:59:32 -- nvmf/common.sh@477 -- # '[' -n 1363346 ']' 00:19:10.817 06:59:32 -- nvmf/common.sh@478 -- # killprocess 1363346 00:19:10.817 06:59:32 -- common/autotest_common.sh@936 -- # '[' -z 1363346 ']' 00:19:10.817 06:59:32 -- common/autotest_common.sh@940 -- # kill -0 1363346 00:19:10.817 06:59:32 -- common/autotest_common.sh@941 -- # uname 00:19:10.817 06:59:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:10.817 06:59:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1363346 00:19:11.077 06:59:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:11.077 06:59:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:11.077 06:59:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1363346' 00:19:11.077 killing process with pid 1363346 00:19:11.077 06:59:32 -- common/autotest_common.sh@955 -- # kill 1363346 00:19:11.077 06:59:32 -- common/autotest_common.sh@960 -- # wait 1363346 00:19:11.077 06:59:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:11.077 06:59:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:11.077 00:19:11.077 real 0m19.334s 00:19:11.077 user 0m25.981s 00:19:11.077 sys 0m5.615s 00:19:11.077 06:59:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:11.077 06:59:32 -- common/autotest_common.sh@10 -- # set +x 00:19:11.077 ************************************ 00:19:11.077 END TEST nvmf_queue_depth 00:19:11.077 ************************************ 00:19:11.337 06:59:32 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:11.337 06:59:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:11.337 06:59:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.337 06:59:32 -- common/autotest_common.sh@10 -- # set +x 00:19:11.337 ************************************ 00:19:11.337 START TEST nvmf_multipath 00:19:11.337 ************************************ 00:19:11.337 06:59:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:11.337 * Looking for test storage... 
00:19:11.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:11.337 06:59:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:11.337 06:59:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:11.337 06:59:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:11.337 06:59:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:11.337 06:59:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:11.337 06:59:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:11.337 06:59:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:11.337 06:59:32 -- scripts/common.sh@335 -- # IFS=.-: 00:19:11.337 06:59:32 -- scripts/common.sh@335 -- # read -ra ver1 00:19:11.337 06:59:32 -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.337 06:59:32 -- scripts/common.sh@336 -- # read -ra ver2 00:19:11.337 06:59:32 -- scripts/common.sh@337 -- # local 'op=<' 00:19:11.337 06:59:32 -- scripts/common.sh@339 -- # ver1_l=2 00:19:11.337 06:59:32 -- scripts/common.sh@340 -- # ver2_l=1 00:19:11.337 06:59:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:11.337 06:59:32 -- scripts/common.sh@343 -- # case "$op" in 00:19:11.337 06:59:32 -- scripts/common.sh@344 -- # : 1 00:19:11.337 06:59:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:11.337 06:59:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:11.337 06:59:32 -- scripts/common.sh@364 -- # decimal 1 00:19:11.337 06:59:32 -- scripts/common.sh@352 -- # local d=1 00:19:11.337 06:59:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.337 06:59:32 -- scripts/common.sh@354 -- # echo 1 00:19:11.337 06:59:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:11.337 06:59:32 -- scripts/common.sh@365 -- # decimal 2 00:19:11.337 06:59:32 -- scripts/common.sh@352 -- # local d=2 00:19:11.337 06:59:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.337 06:59:32 -- scripts/common.sh@354 -- # echo 2 00:19:11.337 06:59:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:11.337 06:59:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:11.337 06:59:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:11.337 06:59:32 -- scripts/common.sh@367 -- # return 0 00:19:11.337 06:59:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.337 06:59:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:11.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.337 --rc genhtml_branch_coverage=1 00:19:11.337 --rc genhtml_function_coverage=1 00:19:11.337 --rc genhtml_legend=1 00:19:11.337 --rc geninfo_all_blocks=1 00:19:11.337 --rc geninfo_unexecuted_blocks=1 00:19:11.337 00:19:11.337 ' 00:19:11.337 06:59:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:11.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.337 --rc genhtml_branch_coverage=1 00:19:11.337 --rc genhtml_function_coverage=1 00:19:11.337 --rc genhtml_legend=1 00:19:11.337 --rc geninfo_all_blocks=1 00:19:11.337 --rc geninfo_unexecuted_blocks=1 00:19:11.337 00:19:11.337 ' 00:19:11.337 06:59:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:11.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.337 --rc genhtml_branch_coverage=1 00:19:11.337 --rc genhtml_function_coverage=1 00:19:11.337 --rc genhtml_legend=1 00:19:11.337 --rc geninfo_all_blocks=1 00:19:11.337 --rc geninfo_unexecuted_blocks=1 00:19:11.337 00:19:11.337 ' 
00:19:11.337 06:59:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:11.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.337 --rc genhtml_branch_coverage=1 00:19:11.337 --rc genhtml_function_coverage=1 00:19:11.337 --rc genhtml_legend=1 00:19:11.337 --rc geninfo_all_blocks=1 00:19:11.337 --rc geninfo_unexecuted_blocks=1 00:19:11.337 00:19:11.337 ' 00:19:11.337 06:59:32 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.337 06:59:32 -- nvmf/common.sh@7 -- # uname -s 00:19:11.337 06:59:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.337 06:59:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.337 06:59:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.337 06:59:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.337 06:59:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.337 06:59:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.337 06:59:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.337 06:59:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.337 06:59:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.337 06:59:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.337 06:59:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:11.337 06:59:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:11.337 06:59:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.337 06:59:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.337 06:59:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.337 06:59:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:11.337 06:59:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.337 06:59:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.337 06:59:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.337 06:59:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.337 06:59:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.337 06:59:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.337 06:59:32 -- paths/export.sh@5 -- # export PATH 00:19:11.337 06:59:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.337 06:59:32 -- nvmf/common.sh@46 -- # : 0 00:19:11.337 06:59:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:11.337 06:59:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:11.337 06:59:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:11.337 06:59:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.337 06:59:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.337 06:59:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:11.337 06:59:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:11.337 06:59:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:11.337 06:59:32 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.337 06:59:32 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.337 06:59:32 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:11.337 06:59:32 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:11.337 06:59:32 -- target/multipath.sh@43 -- # nvmftestinit 00:19:11.337 06:59:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:11.337 06:59:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.337 06:59:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:11.337 06:59:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:11.337 06:59:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:11.337 06:59:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.337 06:59:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.337 06:59:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.596 06:59:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:11.596 06:59:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:11.596 06:59:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:11.596 06:59:32 -- common/autotest_common.sh@10 -- # set +x 00:19:18.169 06:59:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:18.169 06:59:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:18.169 06:59:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:18.169 06:59:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:18.169 06:59:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:18.169 06:59:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:18.169 06:59:39 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:18.169 06:59:39 -- nvmf/common.sh@294 -- # net_devs=() 00:19:18.169 06:59:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:18.169 06:59:39 -- nvmf/common.sh@295 -- # e810=() 00:19:18.169 06:59:39 -- nvmf/common.sh@295 -- # local -ga e810 00:19:18.169 06:59:39 -- nvmf/common.sh@296 -- # x722=() 00:19:18.169 06:59:39 -- nvmf/common.sh@296 -- # local -ga x722 00:19:18.169 06:59:39 -- nvmf/common.sh@297 -- # mlx=() 00:19:18.169 06:59:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:18.169 06:59:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.169 06:59:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:18.169 06:59:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:18.169 06:59:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:18.169 06:59:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:18.169 06:59:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:18.169 06:59:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:18.169 06:59:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:18.169 06:59:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:18.169 06:59:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:18.169 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:18.169 06:59:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:18.169 06:59:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:18.169 06:59:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:18.169 06:59:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:18.169 06:59:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:18.169 06:59:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.169 06:59:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:18.170 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:18.170 06:59:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:18.170 06:59:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:18.170 06:59:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:18.170 06:59:39 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.170 06:59:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:18.170 06:59:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.170 06:59:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:18.170 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:18.170 06:59:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.170 06:59:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.170 06:59:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:18.170 06:59:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.170 06:59:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:18.170 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:18.170 06:59:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.170 06:59:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:18.170 06:59:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:18.170 06:59:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:18.170 06:59:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:18.170 06:59:39 -- nvmf/common.sh@57 -- # uname 00:19:18.170 06:59:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:18.170 06:59:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:18.170 06:59:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:18.170 06:59:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:18.170 06:59:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:18.170 06:59:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:18.170 06:59:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:18.170 06:59:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:18.170 06:59:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:18.170 06:59:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:18.170 06:59:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:18.170 06:59:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.170 06:59:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:18.170 06:59:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:18.170 06:59:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.170 06:59:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:18.170 06:59:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:18.170 06:59:39 -- nvmf/common.sh@104 -- # continue 2 00:19:18.170 06:59:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:18.170 06:59:39 -- nvmf/common.sh@104 -- # continue 2 00:19:18.170 06:59:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:18.170 06:59:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:18.170 06:59:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:18.170 06:59:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:18.170 06:59:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:18.170 06:59:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:18.170 06:59:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:18.170 06:59:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:18.170 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.170 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:18.170 altname enp217s0f0np0 00:19:18.170 altname ens818f0np0 00:19:18.170 inet 192.168.100.8/24 scope global mlx_0_0 00:19:18.170 valid_lft forever preferred_lft forever 00:19:18.170 06:59:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:18.170 06:59:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:18.170 06:59:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:18.170 06:59:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:18.170 06:59:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:18.170 06:59:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:18.170 06:59:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:18.170 06:59:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:18.170 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:18.170 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:18.170 altname enp217s0f1np1 00:19:18.170 altname ens818f1np1 00:19:18.170 inet 192.168.100.9/24 scope global mlx_0_1 00:19:18.170 valid_lft forever preferred_lft forever 00:19:18.170 06:59:39 -- nvmf/common.sh@410 -- # return 0 00:19:18.170 06:59:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:18.170 06:59:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:18.170 06:59:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:18.170 06:59:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:18.170 06:59:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:18.170 06:59:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:18.170 06:59:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:18.170 06:59:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:18.170 06:59:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:18.170 06:59:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:18.170 06:59:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:18.170 06:59:39 -- nvmf/common.sh@104 -- # continue 2 00:19:18.170 06:59:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:18.170 06:59:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:18.431 06:59:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:18.431 06:59:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:18.431 06:59:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:18.431 06:59:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:18.431 06:59:39 -- nvmf/common.sh@104 -- # continue 2 00:19:18.431 06:59:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:18.431 06:59:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:18.431 06:59:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:18.431 06:59:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:18.431 06:59:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:18.431 06:59:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:18.431 06:59:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:18.431 06:59:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:18.431 06:59:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:18.431 06:59:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:18.431 06:59:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:18.431 06:59:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:18.431 06:59:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:18.431 192.168.100.9' 00:19:18.431 06:59:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:18.431 192.168.100.9' 00:19:18.431 06:59:39 -- nvmf/common.sh@445 -- # head -n 1 00:19:18.431 06:59:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:18.431 06:59:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:18.431 192.168.100.9' 00:19:18.431 06:59:39 -- nvmf/common.sh@446 -- # tail -n +2 00:19:18.431 06:59:39 -- nvmf/common.sh@446 -- # head -n 1 00:19:18.431 06:59:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:18.431 06:59:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:18.431 06:59:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:18.431 06:59:39 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:18.431 06:59:39 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:18.431 06:59:39 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:18.431 run this test only with TCP transport for now 00:19:18.431 06:59:39 -- target/multipath.sh@53 -- # nvmftestfini 00:19:18.431 06:59:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:18.431 06:59:39 -- nvmf/common.sh@116 -- # sync 00:19:18.431 06:59:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@119 -- # set +e 00:19:18.431 06:59:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:18.431 06:59:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:18.431 rmmod nvme_rdma 00:19:18.431 rmmod nvme_fabrics 00:19:18.431 06:59:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:18.431 06:59:39 -- nvmf/common.sh@123 -- # set -e 00:19:18.431 06:59:39 -- nvmf/common.sh@124 -- # return 0 00:19:18.431 06:59:39 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:18.431 06:59:39 -- target/multipath.sh@54 -- # exit 0 00:19:18.431 06:59:39 -- target/multipath.sh@1 -- # nvmftestfini 00:19:18.431 06:59:39 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:18.431 06:59:39 -- nvmf/common.sh@116 -- # sync 00:19:18.431 06:59:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@119 -- # set +e 00:19:18.431 06:59:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:18.431 06:59:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:18.431 06:59:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:18.431 06:59:39 -- nvmf/common.sh@123 -- # set -e 00:19:18.431 06:59:39 -- nvmf/common.sh@124 -- # return 0 00:19:18.431 06:59:39 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:18.431 06:59:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:18.431 00:19:18.431 real 0m7.210s 00:19:18.431 user 0m2.077s 00:19:18.431 sys 0m5.330s 00:19:18.431 06:59:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:18.431 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:18.431 ************************************ 00:19:18.431 END TEST nvmf_multipath 00:19:18.431 ************************************ 00:19:18.431 06:59:40 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:18.431 06:59:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:18.431 06:59:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:18.431 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:19:18.431 ************************************ 00:19:18.431 START TEST nvmf_zcopy 00:19:18.431 ************************************ 00:19:18.431 06:59:40 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:18.691 * Looking for test storage... 00:19:18.691 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:18.691 06:59:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:18.691 06:59:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:18.691 06:59:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:18.691 06:59:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:18.691 06:59:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:18.691 06:59:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:18.691 06:59:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:18.691 06:59:40 -- scripts/common.sh@335 -- # IFS=.-: 00:19:18.691 06:59:40 -- scripts/common.sh@335 -- # read -ra ver1 00:19:18.691 06:59:40 -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.691 06:59:40 -- scripts/common.sh@336 -- # read -ra ver2 00:19:18.691 06:59:40 -- scripts/common.sh@337 -- # local 'op=<' 00:19:18.691 06:59:40 -- scripts/common.sh@339 -- # ver1_l=2 00:19:18.691 06:59:40 -- scripts/common.sh@340 -- # ver2_l=1 00:19:18.691 06:59:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:18.691 06:59:40 -- scripts/common.sh@343 -- # case "$op" in 00:19:18.691 06:59:40 -- scripts/common.sh@344 -- # : 1 00:19:18.691 06:59:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:18.691 06:59:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:18.691 06:59:40 -- scripts/common.sh@364 -- # decimal 1 00:19:18.691 06:59:40 -- scripts/common.sh@352 -- # local d=1 00:19:18.691 06:59:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.692 06:59:40 -- scripts/common.sh@354 -- # echo 1 00:19:18.692 06:59:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:18.692 06:59:40 -- scripts/common.sh@365 -- # decimal 2 00:19:18.692 06:59:40 -- scripts/common.sh@352 -- # local d=2 00:19:18.692 06:59:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.692 06:59:40 -- scripts/common.sh@354 -- # echo 2 00:19:18.692 06:59:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:18.692 06:59:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:18.692 06:59:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:18.692 06:59:40 -- scripts/common.sh@367 -- # return 0 00:19:18.692 06:59:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.692 06:59:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.692 --rc genhtml_branch_coverage=1 00:19:18.692 --rc genhtml_function_coverage=1 00:19:18.692 --rc genhtml_legend=1 00:19:18.692 --rc geninfo_all_blocks=1 00:19:18.692 --rc geninfo_unexecuted_blocks=1 00:19:18.692 00:19:18.692 ' 00:19:18.692 06:59:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.692 --rc genhtml_branch_coverage=1 00:19:18.692 --rc genhtml_function_coverage=1 00:19:18.692 --rc genhtml_legend=1 00:19:18.692 --rc geninfo_all_blocks=1 00:19:18.692 --rc geninfo_unexecuted_blocks=1 00:19:18.692 00:19:18.692 ' 00:19:18.692 06:59:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.692 --rc genhtml_branch_coverage=1 00:19:18.692 --rc genhtml_function_coverage=1 00:19:18.692 --rc genhtml_legend=1 00:19:18.692 --rc geninfo_all_blocks=1 00:19:18.692 --rc geninfo_unexecuted_blocks=1 00:19:18.692 00:19:18.692 ' 00:19:18.692 06:59:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.692 --rc genhtml_branch_coverage=1 00:19:18.692 --rc genhtml_function_coverage=1 00:19:18.692 --rc genhtml_legend=1 00:19:18.692 --rc geninfo_all_blocks=1 00:19:18.692 --rc geninfo_unexecuted_blocks=1 00:19:18.692 00:19:18.692 ' 00:19:18.692 06:59:40 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.692 06:59:40 -- nvmf/common.sh@7 -- # uname -s 00:19:18.692 06:59:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.692 06:59:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.692 06:59:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.692 06:59:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.692 06:59:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.692 06:59:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.692 06:59:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.692 06:59:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.692 06:59:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.692 06:59:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.692 06:59:40 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:18.692 06:59:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:18.692 06:59:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.692 06:59:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.692 06:59:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.692 06:59:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:18.692 06:59:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.692 06:59:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.692 06:59:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.692 06:59:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.692 06:59:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.692 06:59:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.692 06:59:40 -- paths/export.sh@5 -- # export PATH 00:19:18.692 06:59:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.692 06:59:40 -- nvmf/common.sh@46 -- # : 0 00:19:18.692 06:59:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:18.692 06:59:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:18.692 06:59:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:18.692 06:59:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.692 06:59:40 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.692 06:59:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:18.692 06:59:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:18.692 06:59:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:18.692 06:59:40 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:18.692 06:59:40 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:18.692 06:59:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.692 06:59:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:18.692 06:59:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:18.692 06:59:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:18.692 06:59:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.692 06:59:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.692 06:59:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.692 06:59:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:18.692 06:59:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:18.692 06:59:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:18.692 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.261 06:59:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:25.261 06:59:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:25.261 06:59:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:25.261 06:59:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:25.261 06:59:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:25.261 06:59:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:25.261 06:59:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:25.261 06:59:46 -- nvmf/common.sh@294 -- # net_devs=() 00:19:25.261 06:59:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:25.261 06:59:46 -- nvmf/common.sh@295 -- # e810=() 00:19:25.261 06:59:46 -- nvmf/common.sh@295 -- # local -ga e810 00:19:25.261 06:59:46 -- nvmf/common.sh@296 -- # x722=() 00:19:25.261 06:59:46 -- nvmf/common.sh@296 -- # local -ga x722 00:19:25.261 06:59:46 -- nvmf/common.sh@297 -- # mlx=() 00:19:25.261 06:59:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:25.261 06:59:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.261 06:59:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:25.261 06:59:46 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:25.261 06:59:46 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:25.261 06:59:46 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:25.261 
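[Editor's note] The e810/x722/mlx arrays below are filled from a prebuilt pci_bus_cache map keyed by "vendor:device", which is how the run narrows pci_devs to the two Mellanox ConnectX-4 Lx ports. Absent that cache, the same classification can be done straight from sysfs; a hedged sketch (plain sysfs scan, not the harness's cached lookup):

    # rough sysfs equivalent of the pci_bus_cache lookup:
    # list 0x15b3:0x1015 ports, as in "Found 0000:d9:00.0 (0x15b3 - 0x1015)"
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == 0x15b3 && $device == 0x1015 ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
        # the bound netdev name, if any (mlx_0_0 / mlx_0_1 here after renaming)
        ls "$dev/net" 2>/dev/null
    done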
06:59:46 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:25.261 06:59:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:25.261 06:59:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.261 06:59:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:25.261 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:25.261 06:59:46 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.261 06:59:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.261 06:59:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:25.261 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:25.261 06:59:46 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.261 06:59:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:25.261 06:59:46 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.261 06:59:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.261 06:59:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.261 06:59:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.261 06:59:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:25.261 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:25.261 06:59:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.261 06:59:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.261 06:59:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.261 06:59:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.261 06:59:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.261 06:59:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:25.261 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:25.261 06:59:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.261 06:59:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:25.261 06:59:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:25.261 06:59:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:25.261 06:59:46 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:25.261 06:59:46 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:25.261 06:59:46 -- nvmf/common.sh@57 -- # uname 00:19:25.261 06:59:46 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:25.261 06:59:46 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:25.261 06:59:46 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:25.261 06:59:46 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:25.261 06:59:46 -- 
nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:25.261 06:59:46 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:25.261 06:59:46 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:25.261 06:59:46 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:25.261 06:59:46 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:25.261 06:59:46 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:25.262 06:59:46 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:25.262 06:59:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.262 06:59:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:25.262 06:59:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:25.262 06:59:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.262 06:59:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:25.262 06:59:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@104 -- # continue 2 00:19:25.262 06:59:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@104 -- # continue 2 00:19:25.262 06:59:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:25.262 06:59:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.262 06:59:46 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:25.262 06:59:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:25.262 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.262 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:25.262 altname enp217s0f0np0 00:19:25.262 altname ens818f0np0 00:19:25.262 inet 192.168.100.8/24 scope global mlx_0_0 00:19:25.262 valid_lft forever preferred_lft forever 00:19:25.262 06:59:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:25.262 06:59:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.262 06:59:46 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:25.262 06:59:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:25.262 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.262 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:25.262 altname enp217s0f1np1 00:19:25.262 altname 
ens818f1np1 00:19:25.262 inet 192.168.100.9/24 scope global mlx_0_1 00:19:25.262 valid_lft forever preferred_lft forever 00:19:25.262 06:59:46 -- nvmf/common.sh@410 -- # return 0 00:19:25.262 06:59:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:25.262 06:59:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:25.262 06:59:46 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:25.262 06:59:46 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:25.262 06:59:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.262 06:59:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:25.262 06:59:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:25.262 06:59:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.262 06:59:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:25.262 06:59:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@104 -- # continue 2 00:19:25.262 06:59:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.262 06:59:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.262 06:59:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@104 -- # continue 2 00:19:25.262 06:59:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:25.262 06:59:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.262 06:59:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:25.262 06:59:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:25.262 06:59:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:25.262 06:59:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:25.262 192.168.100.9' 00:19:25.262 06:59:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:25.262 192.168.100.9' 00:19:25.262 06:59:46 -- nvmf/common.sh@445 -- # head -n 1 00:19:25.262 06:59:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:25.262 06:59:46 -- nvmf/common.sh@446 -- # head -n 1 00:19:25.262 06:59:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:25.262 192.168.100.9' 00:19:25.262 06:59:46 -- nvmf/common.sh@446 -- # tail -n +2 00:19:25.262 06:59:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:25.262 06:59:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:25.262 06:59:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:25.262 
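[Editor's note] The RDMA_IP_LIST derivation just above reads each interface's IPv4 address with "ip -o -4 addr show" (field 4 carries "addr/prefix"), then takes the first and second list entries as the target IPs via head/tail. A compact sketch of the same pipeline; the helper name and the fixed interface list are illustrative:

    # field 4 of the one-line-per-entry output is "192.168.100.8/24"
    rdma_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    ip_list=$(for ifc in mlx_0_0 mlx_0_1; do rdma_ip "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$ip_list")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$ip_list" | head -n 1)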
06:59:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:25.262 06:59:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:25.262 06:59:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:25.262 06:59:46 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:25.262 06:59:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:25.262 06:59:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:25.262 06:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:25.262 06:59:46 -- nvmf/common.sh@469 -- # nvmfpid=1371934 00:19:25.262 06:59:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:25.262 06:59:46 -- nvmf/common.sh@470 -- # waitforlisten 1371934 00:19:25.262 06:59:46 -- common/autotest_common.sh@829 -- # '[' -z 1371934 ']' 00:19:25.262 06:59:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.262 06:59:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.262 06:59:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.262 06:59:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.262 06:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:25.262 [2024-12-15 06:59:46.295799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:25.262 [2024-12-15 06:59:46.295848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.262 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.262 [2024-12-15 06:59:46.367303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.262 [2024-12-15 06:59:46.403302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:25.262 [2024-12-15 06:59:46.403413] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.262 [2024-12-15 06:59:46.403423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.262 [2024-12-15 06:59:46.403432] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
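[Editor's note] nvmfappstart above launches the target as "nvmf_tgt -i 0 -e 0xFFFF -m 0x2" (shm id 0, all tracepoint groups enabled, reactor mask selecting core 1) and waitforlisten then blocks until the app answers on its RPC socket. A minimal sketch of that start-and-wait pattern, with paths, the retry budget, and the probe method chosen here as assumptions rather than the harness's exact logic:

    # start the target on core 1 and wait for its RPC socket to come up
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    rpc=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        # rpc.py exits non-zero until the app is listening on $rpc
        scripts/rpc.py -s "$rpc" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done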
00:19:25.262 [2024-12-15 06:59:46.403452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.521 06:59:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.521 06:59:47 -- common/autotest_common.sh@862 -- # return 0 00:19:25.521 06:59:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:25.521 06:59:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.521 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:19:25.521 06:59:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.521 06:59:47 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:25.521 06:59:47 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:25.521 Unsupported transport: rdma 00:19:25.521 06:59:47 -- target/zcopy.sh@17 -- # exit 0 00:19:25.521 06:59:47 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:25.521 06:59:47 -- common/autotest_common.sh@806 -- # type=--id 00:19:25.521 06:59:47 -- common/autotest_common.sh@807 -- # id=0 00:19:25.521 06:59:47 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:25.521 06:59:47 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:25.521 06:59:47 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:25.521 06:59:47 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:25.521 06:59:47 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:25.521 06:59:47 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:25.780 nvmf_trace.0 00:19:25.780 06:59:47 -- common/autotest_common.sh@821 -- # return 0 00:19:25.780 06:59:47 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:25.780 06:59:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:25.780 06:59:47 -- nvmf/common.sh@116 -- # sync 00:19:25.780 06:59:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:25.780 06:59:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:25.780 06:59:47 -- nvmf/common.sh@119 -- # set +e 00:19:25.780 06:59:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:25.780 06:59:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:25.780 rmmod nvme_rdma 00:19:25.780 rmmod nvme_fabrics 00:19:25.780 06:59:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:25.780 06:59:47 -- nvmf/common.sh@123 -- # set -e 00:19:25.780 06:59:47 -- nvmf/common.sh@124 -- # return 0 00:19:25.780 06:59:47 -- nvmf/common.sh@477 -- # '[' -n 1371934 ']' 00:19:25.780 06:59:47 -- nvmf/common.sh@478 -- # killprocess 1371934 00:19:25.780 06:59:47 -- common/autotest_common.sh@936 -- # '[' -z 1371934 ']' 00:19:25.780 06:59:47 -- common/autotest_common.sh@940 -- # kill -0 1371934 00:19:25.780 06:59:47 -- common/autotest_common.sh@941 -- # uname 00:19:25.780 06:59:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.780 06:59:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1371934 00:19:25.780 06:59:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:25.780 06:59:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:25.780 06:59:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1371934' 00:19:25.780 killing process with pid 1371934 00:19:25.780 06:59:47 -- common/autotest_common.sh@955 -- # kill 1371934 00:19:25.780 06:59:47 -- common/autotest_common.sh@960 -- # wait 1371934 00:19:26.039 06:59:47 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:19:26.039 06:59:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:26.039 00:19:26.039 real 0m7.454s 00:19:26.039 user 0m3.089s 00:19:26.039 sys 0m5.001s 00:19:26.039 06:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:26.039 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:19:26.039 ************************************ 00:19:26.039 END TEST nvmf_zcopy 00:19:26.039 ************************************ 00:19:26.039 06:59:47 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:26.039 06:59:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:26.039 06:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:26.039 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:19:26.039 ************************************ 00:19:26.039 START TEST nvmf_nmic 00:19:26.039 ************************************ 00:19:26.039 06:59:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:26.039 * Looking for test storage... 00:19:26.039 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.039 06:59:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:26.039 06:59:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:26.039 06:59:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:26.298 06:59:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:26.298 06:59:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:26.298 06:59:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:26.298 06:59:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:26.298 06:59:47 -- scripts/common.sh@335 -- # IFS=.-: 00:19:26.298 06:59:47 -- scripts/common.sh@335 -- # read -ra ver1 00:19:26.298 06:59:47 -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.298 06:59:47 -- scripts/common.sh@336 -- # read -ra ver2 00:19:26.298 06:59:47 -- scripts/common.sh@337 -- # local 'op=<' 00:19:26.298 06:59:47 -- scripts/common.sh@339 -- # ver1_l=2 00:19:26.298 06:59:47 -- scripts/common.sh@340 -- # ver2_l=1 00:19:26.298 06:59:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:26.298 06:59:47 -- scripts/common.sh@343 -- # case "$op" in 00:19:26.298 06:59:47 -- scripts/common.sh@344 -- # : 1 00:19:26.298 06:59:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:26.298 06:59:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.298 06:59:47 -- scripts/common.sh@364 -- # decimal 1 00:19:26.298 06:59:47 -- scripts/common.sh@352 -- # local d=1 00:19:26.298 06:59:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.298 06:59:47 -- scripts/common.sh@354 -- # echo 1 00:19:26.298 06:59:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:26.298 06:59:47 -- scripts/common.sh@365 -- # decimal 2 00:19:26.298 06:59:47 -- scripts/common.sh@352 -- # local d=2 00:19:26.298 06:59:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.298 06:59:47 -- scripts/common.sh@354 -- # echo 2 00:19:26.298 06:59:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:26.298 06:59:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:26.298 06:59:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:26.298 06:59:47 -- scripts/common.sh@367 -- # return 0 00:19:26.298 06:59:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.298 06:59:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.298 --rc genhtml_branch_coverage=1 00:19:26.298 --rc genhtml_function_coverage=1 00:19:26.298 --rc genhtml_legend=1 00:19:26.298 --rc geninfo_all_blocks=1 00:19:26.298 --rc geninfo_unexecuted_blocks=1 00:19:26.298 00:19:26.298 ' 00:19:26.298 06:59:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.298 --rc genhtml_branch_coverage=1 00:19:26.298 --rc genhtml_function_coverage=1 00:19:26.298 --rc genhtml_legend=1 00:19:26.298 --rc geninfo_all_blocks=1 00:19:26.298 --rc geninfo_unexecuted_blocks=1 00:19:26.298 00:19:26.298 ' 00:19:26.298 06:59:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.298 --rc genhtml_branch_coverage=1 00:19:26.298 --rc genhtml_function_coverage=1 00:19:26.298 --rc genhtml_legend=1 00:19:26.298 --rc geninfo_all_blocks=1 00:19:26.298 --rc geninfo_unexecuted_blocks=1 00:19:26.298 00:19:26.298 ' 00:19:26.298 06:59:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:26.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.298 --rc genhtml_branch_coverage=1 00:19:26.298 --rc genhtml_function_coverage=1 00:19:26.298 --rc genhtml_legend=1 00:19:26.298 --rc geninfo_all_blocks=1 00:19:26.298 --rc geninfo_unexecuted_blocks=1 00:19:26.298 00:19:26.298 ' 00:19:26.298 06:59:47 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.298 06:59:47 -- nvmf/common.sh@7 -- # uname -s 00:19:26.298 06:59:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.298 06:59:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.298 06:59:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.298 06:59:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.298 06:59:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.298 06:59:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.298 06:59:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.298 06:59:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.298 06:59:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.298 06:59:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.298 06:59:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
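[Editor's note] The host identity seen above is generated once per sourcing of nvmf/common.sh: "nvme gen-hostnqn" emits a uuid-based NQN, and the hostid is that same uuid, so the --hostnqn/--hostid pair passed to "nvme connect" stays consistent. A short sketch of that pairing; the exact expansion common.sh uses to strip the prefix may differ:

    # derive a matching hostnqn/hostid pair
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # bare <uuid> (text after the last colon)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")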
00:19:26.298 06:59:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:26.298 06:59:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.298 06:59:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.298 06:59:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.298 06:59:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.298 06:59:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.298 06:59:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.298 06:59:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.298 06:59:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.298 06:59:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.298 06:59:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.298 06:59:47 -- paths/export.sh@5 -- # export PATH 00:19:26.298 06:59:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.298 06:59:47 -- nvmf/common.sh@46 -- # : 0 00:19:26.298 06:59:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:26.298 06:59:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:26.298 06:59:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:26.298 06:59:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.298 06:59:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.298 06:59:47 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:26.298 06:59:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:26.298 06:59:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:26.298 06:59:47 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.298 06:59:47 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.298 06:59:47 -- target/nmic.sh@14 -- # nvmftestinit 00:19:26.298 06:59:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:26.298 06:59:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.298 06:59:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:26.298 06:59:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:26.298 06:59:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:26.298 06:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.298 06:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.298 06:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.298 06:59:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:26.298 06:59:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:26.298 06:59:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:26.298 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:19:32.939 06:59:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:32.939 06:59:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:32.939 06:59:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:32.939 06:59:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:32.939 06:59:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:32.939 06:59:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:32.939 06:59:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:32.939 06:59:54 -- nvmf/common.sh@294 -- # net_devs=() 00:19:32.939 06:59:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:32.939 06:59:54 -- nvmf/common.sh@295 -- # e810=() 00:19:32.939 06:59:54 -- nvmf/common.sh@295 -- # local -ga e810 00:19:32.939 06:59:54 -- nvmf/common.sh@296 -- # x722=() 00:19:32.939 06:59:54 -- nvmf/common.sh@296 -- # local -ga x722 00:19:32.939 06:59:54 -- nvmf/common.sh@297 -- # mlx=() 00:19:32.939 06:59:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:32.939 06:59:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.939 06:59:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.940 06:59:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:32.940 06:59:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:32.940 06:59:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:32.940 06:59:54 
-- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:32.940 06:59:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:32.940 06:59:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:32.940 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:32.940 06:59:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:32.940 06:59:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:32.940 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:32.940 06:59:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:32.940 06:59:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:32.940 06:59:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.940 06:59:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:32.940 06:59:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.940 06:59:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:32.940 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:32.940 06:59:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.940 06:59:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.940 06:59:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:32.940 06:59:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.940 06:59:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:32.940 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:32.940 06:59:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.940 06:59:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:32.940 06:59:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:32.940 06:59:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:32.940 06:59:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:32.940 06:59:54 -- nvmf/common.sh@57 -- # uname 00:19:32.940 06:59:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:32.940 06:59:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:32.940 06:59:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:32.940 06:59:54 -- 
nvmf/common.sh@63 -- # modprobe ib_umad 00:19:32.940 06:59:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:32.940 06:59:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:32.940 06:59:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:32.940 06:59:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:32.940 06:59:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:32.940 06:59:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:32.940 06:59:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:32.940 06:59:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:32.940 06:59:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:32.940 06:59:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:32.940 06:59:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:32.940 06:59:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:32.940 06:59:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:32.940 06:59:54 -- nvmf/common.sh@104 -- # continue 2 00:19:32.940 06:59:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:32.940 06:59:54 -- nvmf/common.sh@104 -- # continue 2 00:19:32.940 06:59:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:32.940 06:59:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:32.940 06:59:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:32.940 06:59:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:32.940 06:59:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:32.940 06:59:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:32.940 06:59:54 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:32.940 06:59:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:32.940 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:32.940 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:32.940 altname enp217s0f0np0 00:19:32.940 altname ens818f0np0 00:19:32.940 inet 192.168.100.8/24 scope global mlx_0_0 00:19:32.940 valid_lft forever preferred_lft forever 00:19:32.940 06:59:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:32.940 06:59:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:32.940 06:59:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:32.940 06:59:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:32.940 06:59:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:32.940 06:59:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:32.940 06:59:54 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:32.940 06:59:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:32.940 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:32.940 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:19:32.940 altname enp217s0f1np1 00:19:32.940 altname ens818f1np1 00:19:32.940 inet 192.168.100.9/24 scope global mlx_0_1 00:19:32.940 valid_lft forever preferred_lft forever 00:19:32.940 06:59:54 -- nvmf/common.sh@410 -- # return 0 00:19:32.940 06:59:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:32.940 06:59:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:32.940 06:59:54 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:32.940 06:59:54 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:32.940 06:59:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:32.940 06:59:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:32.940 06:59:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:32.940 06:59:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:32.940 06:59:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:32.940 06:59:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:32.940 06:59:54 -- nvmf/common.sh@104 -- # continue 2 00:19:32.940 06:59:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:32.940 06:59:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:32.940 06:59:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.199 06:59:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:33.199 06:59:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:33.199 06:59:54 -- nvmf/common.sh@104 -- # continue 2 00:19:33.199 06:59:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:33.199 06:59:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:33.199 06:59:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:33.199 06:59:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.199 06:59:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:33.199 06:59:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.199 06:59:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:33.199 06:59:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:33.199 06:59:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:33.199 06:59:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:33.199 06:59:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:33.199 06:59:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:33.199 06:59:54 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:33.199 192.168.100.9' 00:19:33.199 06:59:54 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:33.199 192.168.100.9' 00:19:33.199 06:59:54 -- nvmf/common.sh@445 -- # head -n 1 00:19:33.199 06:59:54 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:33.199 06:59:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:33.199 192.168.100.9' 00:19:33.199 06:59:54 -- nvmf/common.sh@446 -- # tail -n +2 00:19:33.199 06:59:54 -- nvmf/common.sh@446 -- # head -n 1 00:19:33.199 06:59:54 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:33.199 06:59:54 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:33.199 06:59:54 -- nvmf/common.sh@451 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:33.199 06:59:54 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:33.199 06:59:54 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:33.199 06:59:54 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:33.199 06:59:54 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:33.199 06:59:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:33.199 06:59:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.199 06:59:54 -- common/autotest_common.sh@10 -- # set +x 00:19:33.199 06:59:54 -- nvmf/common.sh@469 -- # nvmfpid=1375612 00:19:33.199 06:59:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.199 06:59:54 -- nvmf/common.sh@470 -- # waitforlisten 1375612 00:19:33.199 06:59:54 -- common/autotest_common.sh@829 -- # '[' -z 1375612 ']' 00:19:33.199 06:59:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.199 06:59:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.199 06:59:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.199 06:59:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.199 06:59:54 -- common/autotest_common.sh@10 -- # set +x 00:19:33.199 [2024-12-15 06:59:54.701924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:33.200 [2024-12-15 06:59:54.701974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.200 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.200 [2024-12-15 06:59:54.773938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.200 [2024-12-15 06:59:54.812636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:33.200 [2024-12-15 06:59:54.812740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.200 [2024-12-15 06:59:54.812749] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.200 [2024-12-15 06:59:54.812758] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
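nvmfappstart above launches nvmf_tgt and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A condensed sketch of that polling pattern, assuming scripts/rpc.py and the stock rpc_get_methods RPC (not the exact autotest_common.sh source):

    # Poll until the target's JSON-RPC socket is up, or give up.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died early
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                              # socket answers
            fi
            sleep 0.5
        done
        return 1                                      # timed out
    }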
00:19:33.200 [2024-12-15 06:59:54.812802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.200 [2024-12-15 06:59:54.812822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.200 [2024-12-15 06:59:54.812908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.200 [2024-12-15 06:59:54.812910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.135 06:59:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.135 06:59:55 -- common/autotest_common.sh@862 -- # return 0 00:19:34.135 06:59:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:34.135 06:59:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.135 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 06:59:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.135 06:59:55 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:34.135 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.135 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 [2024-12-15 06:59:55.586299] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9420d0/0x9465a0) succeed. 00:19:34.135 [2024-12-15 06:59:55.595475] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x943670/0x987c40) succeed. 00:19:34.135 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.135 06:59:55 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:34.135 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.135 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 Malloc0 00:19:34.135 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.135 06:59:55 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:34.135 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.135 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.135 06:59:55 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:34.135 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.135 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.135 06:59:55 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:34.135 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.135 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 [2024-12-15 06:59:55.765117] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:34.135 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.135 06:59:55 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:34.135 test case1: single bdev can't be used in multiple subsystems 00:19:34.135 06:59:55 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:34.135 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.135 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.393 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.393 
06:59:55 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:34.393 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.393 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.393 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.393 06:59:55 -- target/nmic.sh@28 -- # nmic_status=0 00:19:34.394 06:59:55 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:34.394 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.394 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.394 [2024-12-15 06:59:55.788919] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:34.394 [2024-12-15 06:59:55.788941] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:34.394 [2024-12-15 06:59:55.788951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:34.394 request: 00:19:34.394 { 00:19:34.394 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.394 "namespace": { 00:19:34.394 "bdev_name": "Malloc0" 00:19:34.394 }, 00:19:34.394 "method": "nvmf_subsystem_add_ns", 00:19:34.394 "req_id": 1 00:19:34.394 } 00:19:34.394 Got JSON-RPC error response 00:19:34.394 response: 00:19:34.394 { 00:19:34.394 "code": -32602, 00:19:34.394 "message": "Invalid parameters" 00:19:34.394 } 00:19:34.394 06:59:55 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:34.394 06:59:55 -- target/nmic.sh@29 -- # nmic_status=1 00:19:34.394 06:59:55 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:34.394 06:59:55 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:34.394 Adding namespace failed - expected result. 
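Test case1 above is a deliberate negative test: Malloc0 is already claimed by cnode1, so adding it to cnode2 must fail, and the script asserts the failure rather than the success. Condensed to its skeleton (same rpc_cmd helper as in the trace; a sketch, not the verbatim nmic.sh source):

    nmic_status=0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?
    if [ "$nmic_status" -eq 0 ]; then
        echo "namespace add unexpectedly succeeded" >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'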
00:19:34.394 06:59:55 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:34.394 test case2: host connect to nvmf target in multiple paths 00:19:34.394 06:59:55 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:34.394 06:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.394 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:19:34.394 [2024-12-15 06:59:55.800996] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:34.394 06:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.394 06:59:55 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:35.329 06:59:56 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:36.265 06:59:57 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:36.265 06:59:57 -- common/autotest_common.sh@1187 -- # local i=0 00:19:36.265 06:59:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.265 06:59:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:36.265 06:59:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:38.169 06:59:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:38.169 06:59:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:38.169 06:59:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:38.169 06:59:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:38.169 06:59:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.169 06:59:59 -- common/autotest_common.sh@1197 -- # return 0 00:19:38.169 06:59:59 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:38.452 [global] 00:19:38.452 thread=1 00:19:38.452 invalidate=1 00:19:38.452 rw=write 00:19:38.452 time_based=1 00:19:38.452 runtime=1 00:19:38.452 ioengine=libaio 00:19:38.452 direct=1 00:19:38.452 bs=4096 00:19:38.452 iodepth=1 00:19:38.452 norandommap=0 00:19:38.452 numjobs=1 00:19:38.452 00:19:38.452 verify_dump=1 00:19:38.452 verify_backlog=512 00:19:38.452 verify_state_save=0 00:19:38.452 do_verify=1 00:19:38.452 verify=crc32c-intel 00:19:38.452 [job0] 00:19:38.452 filename=/dev/nvme0n1 00:19:38.452 Could not set queue depth (nvme0n1) 00:19:38.722 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.722 fio-3.35 00:19:38.722 Starting 1 thread 00:19:39.659 00:19:39.659 job0: (groupid=0, jobs=1): err= 0: pid=1376623: Sun Dec 15 07:00:01 2024 00:19:39.659 read: IOPS=7143, BW=27.9MiB/s (29.3MB/s)(27.9MiB/1001msec) 00:19:39.659 slat (nsec): min=8268, max=34264, avg=8830.35, stdev=920.87 00:19:39.659 clat (usec): min=44, max=226, avg=58.27, stdev= 4.36 00:19:39.659 lat (usec): min=57, max=235, avg=67.10, stdev= 4.43 00:19:39.659 clat percentiles (usec): 00:19:39.659 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:19:39.659 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:19:39.659 | 
70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 66], 00:19:39.659 | 99.00th=[ 70], 99.50th=[ 71], 99.90th=[ 74], 99.95th=[ 76], 00:19:39.659 | 99.99th=[ 227] 00:19:39.659 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:19:39.659 slat (nsec): min=8502, max=42796, avg=11439.50, stdev=1266.56 00:19:39.659 clat (usec): min=31, max=175, avg=55.72, stdev= 4.21 00:19:39.659 lat (usec): min=55, max=186, avg=67.16, stdev= 4.41 00:19:39.659 clat percentiles (usec): 00:19:39.659 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:19:39.659 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:19:39.659 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 63], 00:19:39.659 | 99.00th=[ 67], 99.50th=[ 69], 99.90th=[ 78], 99.95th=[ 82], 00:19:39.659 | 99.99th=[ 176] 00:19:39.659 bw ( KiB/s): min=29120, max=29120, per=100.00%, avg=29120.00, stdev= 0.00, samples=1 00:19:39.659 iops : min= 7280, max= 7280, avg=7280.00, stdev= 0.00, samples=1 00:19:39.659 lat (usec) : 50=2.63%, 100=97.36%, 250=0.01% 00:19:39.659 cpu : usr=12.90%, sys=17.40%, ctx=14319, majf=0, minf=1 00:19:39.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.660 issued rwts: total=7151,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.660 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.660 00:19:39.660 Run status group 0 (all jobs): 00:19:39.660 READ: bw=27.9MiB/s (29.3MB/s), 27.9MiB/s-27.9MiB/s (29.3MB/s-29.3MB/s), io=27.9MiB (29.3MB), run=1001-1001msec 00:19:39.660 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:39.660 00:19:39.660 Disk stats (read/write): 00:19:39.660 nvme0n1: ios=6315/6656, merge=0/0, ticks=310/311, in_queue=621, util=90.68% 00:19:39.660 07:00:01 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:42.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:42.192 07:00:03 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:42.192 07:00:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:42.192 07:00:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:42.192 07:00:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:42.192 07:00:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:42.192 07:00:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:42.192 07:00:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:42.192 07:00:03 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:42.192 07:00:03 -- target/nmic.sh@53 -- # nvmftestfini 00:19:42.192 07:00:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:42.192 07:00:03 -- nvmf/common.sh@116 -- # sync 00:19:42.192 07:00:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:42.192 07:00:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:42.192 07:00:03 -- nvmf/common.sh@119 -- # set +e 00:19:42.192 07:00:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:42.192 07:00:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:42.192 rmmod nvme_rdma 00:19:42.192 rmmod nvme_fabrics 00:19:42.192 07:00:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:42.192 07:00:03 -- nvmf/common.sh@123 -- # set -e 00:19:42.192 07:00:03 -- 
nvmf/common.sh@124 -- # return 0 00:19:42.192 07:00:03 -- nvmf/common.sh@477 -- # '[' -n 1375612 ']' 00:19:42.192 07:00:03 -- nvmf/common.sh@478 -- # killprocess 1375612 00:19:42.192 07:00:03 -- common/autotest_common.sh@936 -- # '[' -z 1375612 ']' 00:19:42.192 07:00:03 -- common/autotest_common.sh@940 -- # kill -0 1375612 00:19:42.192 07:00:03 -- common/autotest_common.sh@941 -- # uname 00:19:42.192 07:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:42.192 07:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1375612 00:19:42.192 07:00:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:42.192 07:00:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:42.192 07:00:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1375612' 00:19:42.192 killing process with pid 1375612 00:19:42.192 07:00:03 -- common/autotest_common.sh@955 -- # kill 1375612 00:19:42.192 07:00:03 -- common/autotest_common.sh@960 -- # wait 1375612 00:19:42.192 07:00:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:42.192 07:00:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:42.192 00:19:42.192 real 0m16.119s 00:19:42.192 user 0m45.875s 00:19:42.192 sys 0m6.250s 00:19:42.192 07:00:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:42.192 07:00:03 -- common/autotest_common.sh@10 -- # set +x 00:19:42.192 ************************************ 00:19:42.192 END TEST nvmf_nmic 00:19:42.192 ************************************ 00:19:42.192 07:00:03 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:42.192 07:00:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:42.192 07:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:42.192 07:00:03 -- common/autotest_common.sh@10 -- # set +x 00:19:42.192 ************************************ 00:19:42.192 START TEST nvmf_fio_target 00:19:42.192 ************************************ 00:19:42.192 07:00:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:42.192 * Looking for test storage... 
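The real/user/sys block and the END TEST / START TEST banners above come from autotest's run_test wrapper, which times the test command and frames its output. Roughly this shape (a sketch inferred from the visible banners, not the exact autotest_common.sh source):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test_sketch nvmf_fio_target ./test/nvmf/target/fio.sh --transport=rdma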
00:19:42.192 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:42.192 07:00:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:42.192 07:00:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:42.192 07:00:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:42.192 07:00:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:42.192 07:00:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:42.192 07:00:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:42.192 07:00:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:42.192 07:00:03 -- scripts/common.sh@335 -- # IFS=.-: 00:19:42.192 07:00:03 -- scripts/common.sh@335 -- # read -ra ver1 00:19:42.192 07:00:03 -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.192 07:00:03 -- scripts/common.sh@336 -- # read -ra ver2 00:19:42.192 07:00:03 -- scripts/common.sh@337 -- # local 'op=<' 00:19:42.192 07:00:03 -- scripts/common.sh@339 -- # ver1_l=2 00:19:42.192 07:00:03 -- scripts/common.sh@340 -- # ver2_l=1 00:19:42.192 07:00:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:42.192 07:00:03 -- scripts/common.sh@343 -- # case "$op" in 00:19:42.192 07:00:03 -- scripts/common.sh@344 -- # : 1 00:19:42.192 07:00:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:42.192 07:00:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.192 07:00:03 -- scripts/common.sh@364 -- # decimal 1 00:19:42.192 07:00:03 -- scripts/common.sh@352 -- # local d=1 00:19:42.192 07:00:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.451 07:00:03 -- scripts/common.sh@354 -- # echo 1 00:19:42.451 07:00:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:42.451 07:00:03 -- scripts/common.sh@365 -- # decimal 2 00:19:42.451 07:00:03 -- scripts/common.sh@352 -- # local d=2 00:19:42.451 07:00:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.451 07:00:03 -- scripts/common.sh@354 -- # echo 2 00:19:42.451 07:00:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:42.451 07:00:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:42.451 07:00:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:42.451 07:00:03 -- scripts/common.sh@367 -- # return 0 00:19:42.451 07:00:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.451 07:00:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:42.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.451 --rc genhtml_branch_coverage=1 00:19:42.451 --rc genhtml_function_coverage=1 00:19:42.451 --rc genhtml_legend=1 00:19:42.451 --rc geninfo_all_blocks=1 00:19:42.451 --rc geninfo_unexecuted_blocks=1 00:19:42.451 00:19:42.451 ' 00:19:42.451 07:00:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:42.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.451 --rc genhtml_branch_coverage=1 00:19:42.451 --rc genhtml_function_coverage=1 00:19:42.451 --rc genhtml_legend=1 00:19:42.451 --rc geninfo_all_blocks=1 00:19:42.451 --rc geninfo_unexecuted_blocks=1 00:19:42.451 00:19:42.451 ' 00:19:42.451 07:00:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:42.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.451 --rc genhtml_branch_coverage=1 00:19:42.451 --rc genhtml_function_coverage=1 00:19:42.451 --rc genhtml_legend=1 00:19:42.451 --rc geninfo_all_blocks=1 00:19:42.451 --rc geninfo_unexecuted_blocks=1 00:19:42.451 00:19:42.451 ' 
00:19:42.451 07:00:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:42.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.451 --rc genhtml_branch_coverage=1 00:19:42.451 --rc genhtml_function_coverage=1 00:19:42.451 --rc genhtml_legend=1 00:19:42.451 --rc geninfo_all_blocks=1 00:19:42.451 --rc geninfo_unexecuted_blocks=1 00:19:42.451 00:19:42.451 ' 00:19:42.451 07:00:03 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.451 07:00:03 -- nvmf/common.sh@7 -- # uname -s 00:19:42.452 07:00:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.452 07:00:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.452 07:00:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.452 07:00:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.452 07:00:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.452 07:00:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.452 07:00:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.452 07:00:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.452 07:00:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.452 07:00:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.452 07:00:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:42.452 07:00:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:42.452 07:00:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.452 07:00:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.452 07:00:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.452 07:00:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:42.452 07:00:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.452 07:00:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.452 07:00:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.452 07:00:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.452 07:00:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.452 07:00:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.452 07:00:03 -- paths/export.sh@5 -- # export PATH 00:19:42.452 07:00:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.452 07:00:03 -- nvmf/common.sh@46 -- # : 0 00:19:42.452 07:00:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.452 07:00:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.452 07:00:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.452 07:00:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.452 07:00:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.452 07:00:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.452 07:00:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.452 07:00:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.452 07:00:03 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.452 07:00:03 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.452 07:00:03 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:42.452 07:00:03 -- target/fio.sh@16 -- # nvmftestinit 00:19:42.452 07:00:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:42.452 07:00:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.452 07:00:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.452 07:00:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.452 07:00:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.452 07:00:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.452 07:00:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.452 07:00:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.452 07:00:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:42.452 07:00:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:42.452 07:00:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:42.452 07:00:03 -- common/autotest_common.sh@10 -- # set +x 00:19:49.022 07:00:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.022 07:00:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.022 07:00:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.022 07:00:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.022 07:00:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.022 07:00:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.022 07:00:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.022 07:00:09 -- nvmf/common.sh@294 -- # net_devs=() 
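The enormous PATH above is an artifact of paths/export.sh prepending the same toolchain directories every time it is re-sourced; lookup still works, since the first hit wins. A duplicate-safe prepend, if one wanted it (illustrative sketch; directories taken from the trace):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH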
00:19:49.022 07:00:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.022 07:00:09 -- nvmf/common.sh@295 -- # e810=() 00:19:49.022 07:00:09 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.022 07:00:09 -- nvmf/common.sh@296 -- # x722=() 00:19:49.022 07:00:09 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.022 07:00:09 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.022 07:00:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.022 07:00:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.022 07:00:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.022 07:00:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.023 07:00:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.023 07:00:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:49.023 07:00:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:49.023 07:00:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:49.023 07:00:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.023 07:00:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:49.023 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:49.023 07:00:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:49.023 07:00:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:49.023 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:49.023 07:00:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:49.023 07:00:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.023 07:00:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.023 07:00:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.023 07:00:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.023 07:00:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:49.023 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:49.023 07:00:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.023 07:00:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.023 07:00:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.023 07:00:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.023 07:00:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:49.023 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:49.023 07:00:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.023 07:00:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.023 07:00:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.023 07:00:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:49.023 07:00:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:49.023 07:00:09 -- nvmf/common.sh@57 -- # uname 00:19:49.023 07:00:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:49.023 07:00:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:49.023 07:00:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:49.023 07:00:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:49.023 07:00:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:49.023 07:00:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:49.023 07:00:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:49.023 07:00:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:49.023 07:00:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:49.023 07:00:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:49.023 07:00:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:49.023 07:00:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:49.023 07:00:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:49.023 07:00:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:49.023 07:00:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:49.023 07:00:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:49.023 07:00:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:49.023 07:00:09 -- nvmf/common.sh@104 -- # continue 2 00:19:49.023 07:00:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:49.023 07:00:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.023 07:00:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:49.023 07:00:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:49.023 07:00:10 -- 
nvmf/common.sh@104 -- # continue 2 00:19:49.023 07:00:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:49.023 07:00:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:49.023 07:00:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.023 07:00:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:49.023 07:00:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:49.023 07:00:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:49.023 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:49.023 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:49.023 altname enp217s0f0np0 00:19:49.023 altname ens818f0np0 00:19:49.023 inet 192.168.100.8/24 scope global mlx_0_0 00:19:49.023 valid_lft forever preferred_lft forever 00:19:49.023 07:00:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:49.023 07:00:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:49.023 07:00:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.023 07:00:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:49.023 07:00:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:49.023 07:00:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:49.023 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:49.023 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:49.023 altname enp217s0f1np1 00:19:49.023 altname ens818f1np1 00:19:49.023 inet 192.168.100.9/24 scope global mlx_0_1 00:19:49.023 valid_lft forever preferred_lft forever 00:19:49.023 07:00:10 -- nvmf/common.sh@410 -- # return 0 00:19:49.023 07:00:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.023 07:00:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:49.023 07:00:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:49.023 07:00:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:49.023 07:00:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:49.023 07:00:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:49.023 07:00:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:49.023 07:00:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:49.023 07:00:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:49.023 07:00:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:49.023 07:00:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.023 07:00:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.023 07:00:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:49.023 07:00:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:49.023 07:00:10 -- nvmf/common.sh@104 -- # continue 2 00:19:49.023 07:00:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.023 07:00:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.023 07:00:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:49.023 07:00:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.023 07:00:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
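The backslash runs in matches like [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] above are xtrace's rendering of a quoted right-hand side: inside [[ ]] an unquoted RHS is a glob pattern, so the script quotes it to force a byte-for-byte comparison, and the trace escapes each character instead. For example:

    dev=mlx_0_0
    [[ $dev == "mlx_0_0" ]] && echo exact    # quoted RHS: literal match
    [[ $dev == mlx_* ]]     && echo prefix   # unquoted RHS: glob match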
00:19:49.023 07:00:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:49.023 07:00:10 -- nvmf/common.sh@104 -- # continue 2 00:19:49.023 07:00:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:49.023 07:00:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:49.023 07:00:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.023 07:00:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.023 07:00:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:49.024 07:00:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:49.024 07:00:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:49.024 07:00:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:49.024 07:00:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.024 07:00:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.024 07:00:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:49.024 192.168.100.9' 00:19:49.024 07:00:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:49.024 192.168.100.9' 00:19:49.024 07:00:10 -- nvmf/common.sh@445 -- # head -n 1 00:19:49.024 07:00:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:49.024 07:00:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:49.024 192.168.100.9' 00:19:49.024 07:00:10 -- nvmf/common.sh@446 -- # tail -n +2 00:19:49.024 07:00:10 -- nvmf/common.sh@446 -- # head -n 1 00:19:49.024 07:00:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:49.024 07:00:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:49.024 07:00:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:49.024 07:00:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:49.024 07:00:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:49.024 07:00:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:49.024 07:00:10 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:49.024 07:00:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.024 07:00:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.024 07:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:49.024 07:00:10 -- nvmf/common.sh@469 -- # nvmfpid=1381059 00:19:49.024 07:00:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.024 07:00:10 -- nvmf/common.sh@470 -- # waitforlisten 1381059 00:19:49.024 07:00:10 -- common/autotest_common.sh@829 -- # '[' -z 1381059 ']' 00:19:49.024 07:00:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.024 07:00:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.024 07:00:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.024 07:00:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.024 07:00:10 -- common/autotest_common.sh@10 -- # set +x 00:19:49.024 [2024-12-15 07:00:10.200432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
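get_available_rdma_ips above boils down to scraping one IPv4 per RDMA port and then splitting the list into the first and second target IPs, exactly as traced. Condensed (same ip/awk/cut and head/tail pipeline as the trace):

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9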
00:19:49.024 [2024-12-15 07:00:10.200484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.024 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.024 [2024-12-15 07:00:10.271130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.024 [2024-12-15 07:00:10.308914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.024 [2024-12-15 07:00:10.309033] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.024 [2024-12-15 07:00:10.309043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.024 [2024-12-15 07:00:10.309052] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.024 [2024-12-15 07:00:10.309098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.024 [2024-12-15 07:00:10.309203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.024 [2024-12-15 07:00:10.309287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.024 [2024-12-15 07:00:10.309289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.592 07:00:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.592 07:00:11 -- common/autotest_common.sh@862 -- # return 0 00:19:49.592 07:00:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.592 07:00:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.592 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:19:49.592 07:00:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.592 07:00:11 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:49.851 [2024-12-15 07:00:11.259987] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13000d0/0x13045a0) succeed. 00:19:49.851 [2024-12-15 07:00:11.269338] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1301670/0x1345c40) succeed. 
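From here fio.sh assembles its target purely over JSON-RPC. The sequence that follows, condensed to the commands as traced (rpc.py path shortened; Malloc4-Malloc6 feeding concat0 follow the same pattern and are omitted):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512        # Malloc0, Malloc1: plain namespaces
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512        # Malloc2, Malloc3: raid0 members
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420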
00:19:49.851 07:00:11 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:50.109 07:00:11 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:50.109 07:00:11 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:50.368 07:00:11 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:50.368 07:00:11 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:50.368 07:00:11 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:50.368 07:00:11 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:50.627 07:00:12 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:50.627 07:00:12 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:50.885 07:00:12 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:51.144 07:00:12 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:51.144 07:00:12 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:51.144 07:00:12 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:51.402 07:00:12 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:51.402 07:00:12 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:51.402 07:00:12 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:51.660 07:00:13 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:51.918 07:00:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:51.918 07:00:13 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.918 07:00:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:51.918 07:00:13 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.177 07:00:13 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:52.435 [2024-12-15 07:00:13.893485] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.435 07:00:13 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:52.694 07:00:14 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:52.694 07:00:14 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:53.628 07:00:15 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:53.628 07:00:15 -- common/autotest_common.sh@1187 -- # local 
i=0 00:19:53.628 07:00:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:53.628 07:00:15 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:19:53.886 07:00:15 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:19:53.886 07:00:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:55.788 07:00:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:55.788 07:00:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:55.788 07:00:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:55.788 07:00:17 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:19:55.788 07:00:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:55.788 07:00:17 -- common/autotest_common.sh@1197 -- # return 0 00:19:55.788 07:00:17 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:55.788 [global] 00:19:55.788 thread=1 00:19:55.788 invalidate=1 00:19:55.788 rw=write 00:19:55.788 time_based=1 00:19:55.788 runtime=1 00:19:55.788 ioengine=libaio 00:19:55.788 direct=1 00:19:55.788 bs=4096 00:19:55.788 iodepth=1 00:19:55.788 norandommap=0 00:19:55.788 numjobs=1 00:19:55.788 00:19:55.788 verify_dump=1 00:19:55.788 verify_backlog=512 00:19:55.788 verify_state_save=0 00:19:55.788 do_verify=1 00:19:55.788 verify=crc32c-intel 00:19:55.788 [job0] 00:19:55.788 filename=/dev/nvme0n1 00:19:55.788 [job1] 00:19:55.788 filename=/dev/nvme0n2 00:19:55.788 [job2] 00:19:55.788 filename=/dev/nvme0n3 00:19:55.788 [job3] 00:19:55.788 filename=/dev/nvme0n4 00:19:55.788 Could not set queue depth (nvme0n1) 00:19:55.788 Could not set queue depth (nvme0n2) 00:19:55.788 Could not set queue depth (nvme0n3) 00:19:55.788 Could not set queue depth (nvme0n4) 00:19:56.352 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:56.352 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:56.352 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:56.352 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:56.352 fio-3.35 00:19:56.352 Starting 4 threads 00:19:57.728 00:19:57.728 job0: (groupid=0, jobs=1): err= 0: pid=1382468: Sun Dec 15 07:00:18 2024 00:19:57.728 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:19:57.728 slat (nsec): min=8425, max=20666, avg=8937.38, stdev=810.44 00:19:57.728 clat (usec): min=54, max=100, avg=74.14, stdev= 4.53 00:19:57.728 lat (usec): min=63, max=110, avg=83.07, stdev= 4.59 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 71], 00:19:57.728 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:19:57.728 | 70.00th=[ 77], 80.00th=[ 78], 90.00th=[ 81], 95.00th=[ 82], 00:19:57.728 | 99.00th=[ 86], 99.50th=[ 88], 99.90th=[ 94], 99.95th=[ 99], 00:19:57.728 | 99.99th=[ 101] 00:19:57.728 write: IOPS=6006, BW=23.5MiB/s (24.6MB/s)(23.5MiB/1001msec); 0 zone resets 00:19:57.728 slat (nsec): min=8201, max=33332, avg=11522.36, stdev=995.01 00:19:57.728 clat (usec): min=59, max=394, avg=71.80, stdev= 6.10 00:19:57.728 lat (usec): min=70, max=406, avg=83.32, stdev= 6.18 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 67], 20.00th=[ 69], 00:19:57.728 | 30.00th=[ 70], 
40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:19:57.728 | 70.00th=[ 75], 80.00th=[ 76], 90.00th=[ 78], 95.00th=[ 80], 00:19:57.728 | 99.00th=[ 84], 99.50th=[ 86], 99.90th=[ 92], 99.95th=[ 99], 00:19:57.728 | 99.99th=[ 396] 00:19:57.728 bw ( KiB/s): min=24526, max=24526, per=36.97%, avg=24526.00, stdev= 0.00, samples=1 00:19:57.728 iops : min= 6131, max= 6131, avg=6131.00, stdev= 0.00, samples=1 00:19:57.728 lat (usec) : 100=99.97%, 250=0.03%, 500=0.01% 00:19:57.728 cpu : usr=9.80%, sys=15.10%, ctx=11645, majf=0, minf=1 00:19:57.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 issued rwts: total=5632,6013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.728 job1: (groupid=0, jobs=1): err= 0: pid=1382475: Sun Dec 15 07:00:18 2024 00:19:57.728 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:57.728 slat (nsec): min=8368, max=23502, avg=9267.28, stdev=943.76 00:19:57.728 clat (usec): min=66, max=208, avg=143.94, stdev=23.11 00:19:57.728 lat (usec): min=75, max=222, avg=153.21, stdev=23.16 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 75], 5.00th=[ 85], 10.00th=[ 127], 20.00th=[ 135], 00:19:57.728 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:19:57.728 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 184], 00:19:57.728 | 99.00th=[ 200], 99.50th=[ 202], 99.90th=[ 206], 99.95th=[ 208], 00:19:57.728 | 99.99th=[ 208] 00:19:57.728 write: IOPS=3518, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1001msec); 0 zone resets 00:19:57.728 slat (nsec): min=10268, max=38487, avg=11378.83, stdev=1232.57 00:19:57.728 clat (usec): min=59, max=277, avg=134.60, stdev=21.91 00:19:57.728 lat (usec): min=70, max=288, avg=145.98, stdev=21.99 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 69], 5.00th=[ 80], 10.00th=[ 118], 20.00th=[ 126], 00:19:57.728 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:19:57.728 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 159], 95.00th=[ 172], 00:19:57.728 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 202], 99.95th=[ 229], 00:19:57.728 | 99.99th=[ 277] 00:19:57.728 bw ( KiB/s): min=14152, max=14152, per=21.33%, avg=14152.00, stdev= 0.00, samples=1 00:19:57.728 iops : min= 3538, max= 3538, avg=3538.00, stdev= 0.00, samples=1 00:19:57.728 lat (usec) : 100=7.04%, 250=92.95%, 500=0.02% 00:19:57.728 cpu : usr=5.00%, sys=9.20%, ctx=6594, majf=0, minf=1 00:19:57.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 issued rwts: total=3072,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.728 job2: (groupid=0, jobs=1): err= 0: pid=1382496: Sun Dec 15 07:00:18 2024 00:19:57.728 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:57.728 slat (nsec): min=8554, max=30563, avg=9165.26, stdev=1047.87 00:19:57.728 clat (usec): min=71, max=209, avg=143.87, stdev=22.01 00:19:57.728 lat (usec): min=80, max=218, avg=153.04, stdev=22.05 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 78], 5.00th=[ 89], 10.00th=[ 128], 20.00th=[ 137], 00:19:57.728 | 
30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:19:57.728 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 180], 00:19:57.728 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 208], 99.95th=[ 208], 00:19:57.728 | 99.99th=[ 210] 00:19:57.728 write: IOPS=3530, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec); 0 zone resets 00:19:57.728 slat (nsec): min=10446, max=85260, avg=11537.50, stdev=1657.41 00:19:57.728 clat (usec): min=66, max=227, avg=134.16, stdev=20.82 00:19:57.728 lat (usec): min=77, max=238, avg=145.69, stdev=20.90 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 72], 5.00th=[ 84], 10.00th=[ 118], 20.00th=[ 126], 00:19:57.728 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:19:57.728 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 159], 95.00th=[ 169], 00:19:57.728 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 202], 00:19:57.728 | 99.99th=[ 229] 00:19:57.728 bw ( KiB/s): min=14184, max=14184, per=21.38%, avg=14184.00, stdev= 0.00, samples=1 00:19:57.728 iops : min= 3546, max= 3546, avg=3546.00, stdev= 0.00, samples=1 00:19:57.728 lat (usec) : 100=7.11%, 250=92.89% 00:19:57.728 cpu : usr=4.80%, sys=9.30%, ctx=6607, majf=0, minf=1 00:19:57.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 issued rwts: total=3072,3534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.728 job3: (groupid=0, jobs=1): err= 0: pid=1382503: Sun Dec 15 07:00:18 2024 00:19:57.728 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:57.728 slat (nsec): min=8492, max=19323, avg=9447.43, stdev=821.26 00:19:57.728 clat (usec): min=69, max=205, avg=143.61, stdev=22.04 00:19:57.728 lat (usec): min=79, max=215, avg=153.06, stdev=22.08 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 78], 5.00th=[ 89], 10.00th=[ 127], 20.00th=[ 135], 00:19:57.728 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:19:57.728 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 182], 00:19:57.728 | 99.00th=[ 196], 99.50th=[ 198], 99.90th=[ 202], 99.95th=[ 206], 00:19:57.728 | 99.99th=[ 206] 00:19:57.728 write: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec); 0 zone resets 00:19:57.728 slat (nsec): min=10529, max=62697, avg=11622.92, stdev=1565.99 00:19:57.728 clat (usec): min=66, max=258, avg=134.14, stdev=20.70 00:19:57.728 lat (usec): min=78, max=269, avg=145.76, stdev=20.77 00:19:57.728 clat percentiles (usec): 00:19:57.728 | 1.00th=[ 75], 5.00th=[ 84], 10.00th=[ 118], 20.00th=[ 126], 00:19:57.728 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:19:57.728 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 169], 00:19:57.728 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 223], 00:19:57.728 | 99.99th=[ 260] 00:19:57.728 bw ( KiB/s): min=14160, max=14160, per=21.35%, avg=14160.00, stdev= 0.00, samples=1 00:19:57.728 iops : min= 3540, max= 3540, avg=3540.00, stdev= 0.00, samples=1 00:19:57.728 lat (usec) : 100=7.15%, 250=92.84%, 500=0.02% 00:19:57.728 cpu : usr=5.20%, sys=9.10%, ctx=6605, majf=0, minf=1 00:19:57.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.728 issued rwts: total=3072,3532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.728 00:19:57.728 Run status group 0 (all jobs): 00:19:57.728 READ: bw=57.9MiB/s (60.8MB/s), 12.0MiB/s-22.0MiB/s (12.6MB/s-23.0MB/s), io=58.0MiB (60.8MB), run=1001-1001msec 00:19:57.729 WRITE: bw=64.8MiB/s (67.9MB/s), 13.7MiB/s-23.5MiB/s (14.4MB/s-24.6MB/s), io=64.8MiB (68.0MB), run=1001-1001msec 00:19:57.729 00:19:57.729 Disk stats (read/write): 00:19:57.729 nvme0n1: ios=4658/5027, merge=0/0, ticks=308/340, in_queue=648, util=83.87% 00:19:57.729 nvme0n2: ios=2560/2905, merge=0/0, ticks=345/365, in_queue=710, util=84.76% 00:19:57.729 nvme0n3: ios=2560/2913, merge=0/0, ticks=346/369, in_queue=715, util=88.30% 00:19:57.729 nvme0n4: ios=2560/2911, merge=0/0, ticks=337/348, in_queue=685, util=89.44% 00:19:57.729 07:00:18 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:57.729 [global] 00:19:57.729 thread=1 00:19:57.729 invalidate=1 00:19:57.729 rw=randwrite 00:19:57.729 time_based=1 00:19:57.729 runtime=1 00:19:57.729 ioengine=libaio 00:19:57.729 direct=1 00:19:57.729 bs=4096 00:19:57.729 iodepth=1 00:19:57.729 norandommap=0 00:19:57.729 numjobs=1 00:19:57.729 00:19:57.729 verify_dump=1 00:19:57.729 verify_backlog=512 00:19:57.729 verify_state_save=0 00:19:57.729 do_verify=1 00:19:57.729 verify=crc32c-intel 00:19:57.729 [job0] 00:19:57.729 filename=/dev/nvme0n1 00:19:57.729 [job1] 00:19:57.729 filename=/dev/nvme0n2 00:19:57.729 [job2] 00:19:57.729 filename=/dev/nvme0n3 00:19:57.729 [job3] 00:19:57.729 filename=/dev/nvme0n4 00:19:57.729 Could not set queue depth (nvme0n1) 00:19:57.729 Could not set queue depth (nvme0n2) 00:19:57.729 Could not set queue depth (nvme0n3) 00:19:57.729 Could not set queue depth (nvme0n4) 00:19:57.729 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.729 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.729 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.729 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.729 fio-3.35 00:19:57.729 Starting 4 threads 00:19:59.105 00:19:59.105 job0: (groupid=0, jobs=1): err= 0: pid=1382897: Sun Dec 15 07:00:20 2024 00:19:59.105 read: IOPS=3199, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:19:59.105 slat (nsec): min=8449, max=32567, avg=10601.46, stdev=3337.89 00:19:59.105 clat (usec): min=65, max=232, avg=137.79, stdev=22.99 00:19:59.105 lat (usec): min=74, max=241, avg=148.39, stdev=23.72 00:19:59.105 clat percentiles (usec): 00:19:59.105 | 1.00th=[ 74], 5.00th=[ 83], 10.00th=[ 120], 20.00th=[ 129], 00:19:59.105 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:19:59.105 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 176], 00:19:59.105 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 212], 99.95th=[ 219], 00:19:59.105 | 99.99th=[ 233] 00:19:59.105 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:59.105 slat (nsec): min=10056, max=74806, avg=12785.20, stdev=3583.90 00:19:59.105 clat (usec): min=47, max=200, avg=128.42, stdev=18.98 00:19:59.105 lat (usec): min=72, max=221, avg=141.20, stdev=19.70 00:19:59.105 clat 
percentiles (usec): 00:19:59.105 | 1.00th=[ 72], 5.00th=[ 91], 10.00th=[ 109], 20.00th=[ 120], 00:19:59.105 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:19:59.105 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 159], 00:19:59.105 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 200], 00:19:59.105 | 99.99th=[ 202] 00:19:59.105 bw ( KiB/s): min=15992, max=15992, per=25.01%, avg=15992.00, stdev= 0.00, samples=1 00:19:59.105 iops : min= 3998, max= 3998, avg=3998.00, stdev= 0.00, samples=1 00:19:59.105 lat (usec) : 50=0.01%, 100=7.46%, 250=92.53% 00:19:59.105 cpu : usr=5.10%, sys=9.60%, ctx=6787, majf=0, minf=1 00:19:59.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.105 issued rwts: total=3203,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:59.105 job1: (groupid=0, jobs=1): err= 0: pid=1382905: Sun Dec 15 07:00:20 2024 00:19:59.105 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:19:59.105 slat (nsec): min=3934, max=35354, avg=8824.49, stdev=1565.20 00:19:59.105 clat (usec): min=52, max=237, avg=95.29, stdev=29.57 00:19:59.105 lat (usec): min=65, max=245, avg=104.11, stdev=29.66 00:19:59.105 clat percentiles (usec): 00:19:59.106 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 74], 00:19:59.106 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 82], 00:19:59.106 | 70.00th=[ 125], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 145], 00:19:59.106 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 208], 00:19:59.106 | 99.99th=[ 237] 00:19:59.106 write: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec); 0 zone resets 00:19:59.106 slat (nsec): min=4786, max=59497, avg=10973.97, stdev=1734.84 00:19:59.106 clat (usec): min=54, max=192, avg=93.91, stdev=27.42 00:19:59.106 lat (usec): min=64, max=203, avg=104.88, stdev=27.67 00:19:59.106 clat percentiles (usec): 00:19:59.106 | 1.00th=[ 64], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:19:59.106 | 30.00th=[ 73], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 91], 00:19:59.106 | 70.00th=[ 122], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 137], 00:19:59.106 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 167], 00:19:59.106 | 99.99th=[ 194] 00:19:59.106 bw ( KiB/s): min=16192, max=16192, per=25.32%, avg=16192.00, stdev= 0.00, samples=1 00:19:59.106 iops : min= 4048, max= 4048, avg=4048.00, stdev= 0.00, samples=1 00:19:59.106 lat (usec) : 100=64.75%, 250=35.25% 00:19:59.106 cpu : usr=7.40%, sys=12.10%, ctx=9345, majf=0, minf=1 00:19:59.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.106 issued rwts: total=4608,4737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:59.106 job2: (groupid=0, jobs=1): err= 0: pid=1382924: Sun Dec 15 07:00:20 2024 00:19:59.106 read: IOPS=3688, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1000msec) 00:19:59.106 slat (nsec): min=8718, max=35800, avg=10974.03, stdev=3898.04 00:19:59.106 clat (usec): min=65, max=212, avg=118.61, stdev=33.21 00:19:59.106 lat (usec): min=78, max=232, avg=129.59, stdev=35.18 00:19:59.106 clat 
percentiles (usec): 00:19:59.106 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 84], 00:19:59.106 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 129], 60.00th=[ 135], 00:19:59.106 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 163], 95.00th=[ 174], 00:19:59.106 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 204], 99.95th=[ 208], 00:19:59.106 | 99.99th=[ 212] 00:19:59.106 write: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec); 0 zone resets 00:19:59.106 slat (nsec): min=10562, max=61411, avg=12729.49, stdev=3356.28 00:19:59.106 clat (usec): min=56, max=202, avg=109.32, stdev=28.47 00:19:59.106 lat (usec): min=67, max=225, avg=122.05, stdev=29.92 00:19:59.106 clat percentiles (usec): 00:19:59.106 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:19:59.106 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 113], 60.00th=[ 126], 00:19:59.106 | 70.00th=[ 130], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 155], 00:19:59.106 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 194], 00:19:59.106 | 99.99th=[ 202] 00:19:59.106 bw ( KiB/s): min=20480, max=20480, per=32.03%, avg=20480.00, stdev= 0.00, samples=1 00:19:59.106 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:59.106 lat (usec) : 100=46.13%, 250=53.87% 00:19:59.106 cpu : usr=6.60%, sys=10.10%, ctx=7785, majf=0, minf=1 00:19:59.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.106 issued rwts: total=3688,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:59.106 job3: (groupid=0, jobs=1): err= 0: pid=1382930: Sun Dec 15 07:00:20 2024 00:19:59.106 read: IOPS=3262, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1001msec) 00:19:59.106 slat (nsec): min=8657, max=28662, avg=10324.64, stdev=2421.66 00:19:59.106 clat (usec): min=70, max=236, avg=137.40, stdev=20.49 00:19:59.106 lat (usec): min=80, max=246, avg=147.72, stdev=20.68 00:19:59.106 clat percentiles (usec): 00:19:59.106 | 1.00th=[ 77], 5.00th=[ 89], 10.00th=[ 122], 20.00th=[ 128], 00:19:59.106 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:19:59.106 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 172], 00:19:59.106 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 202], 99.95th=[ 215], 00:19:59.106 | 99.99th=[ 237] 00:19:59.106 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:59.106 slat (nsec): min=10169, max=65187, avg=12583.08, stdev=2801.45 00:19:59.106 clat (usec): min=68, max=220, avg=126.80, stdev=20.14 00:19:59.106 lat (usec): min=79, max=232, avg=139.38, stdev=20.34 00:19:59.106 clat percentiles (usec): 00:19:59.106 | 1.00th=[ 73], 5.00th=[ 81], 10.00th=[ 91], 20.00th=[ 119], 00:19:59.106 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:19:59.106 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 157], 00:19:59.106 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 215], 00:19:59.106 | 99.99th=[ 221] 00:19:59.106 bw ( KiB/s): min=15504, max=15504, per=24.25%, avg=15504.00, stdev= 0.00, samples=1 00:19:59.106 iops : min= 3876, max= 3876, avg=3876.00, stdev= 0.00, samples=1 00:19:59.106 lat (usec) : 100=9.05%, 250=90.95% 00:19:59.106 cpu : usr=5.10%, sys=9.10%, ctx=6850, majf=0, minf=1 00:19:59.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.106 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.106 issued rwts: total=3266,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:59.106 00:19:59.106 Run status group 0 (all jobs): 00:19:59.106 READ: bw=57.6MiB/s (60.4MB/s), 12.5MiB/s-18.0MiB/s (13.1MB/s-18.9MB/s), io=57.7MiB (60.5MB), run=1000-1001msec 00:19:59.106 WRITE: bw=62.4MiB/s (65.5MB/s), 14.0MiB/s-18.5MiB/s (14.7MB/s-19.4MB/s), io=62.5MiB (65.5MB), run=1000-1001msec 00:19:59.106 00:19:59.106 Disk stats (read/write): 00:19:59.106 nvme0n1: ios=2656/3072, merge=0/0, ticks=357/360, in_queue=717, util=84.17% 00:19:59.106 nvme0n2: ios=3584/3852, merge=0/0, ticks=333/348, in_queue=681, util=85.19% 00:19:59.106 nvme0n3: ios=3173/3584, merge=0/0, ticks=332/349, in_queue=681, util=88.45% 00:19:59.106 nvme0n4: ios=2658/3072, merge=0/0, ticks=351/369, in_queue=720, util=89.39% 00:19:59.106 07:00:20 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:59.106 [global] 00:19:59.106 thread=1 00:19:59.106 invalidate=1 00:19:59.106 rw=write 00:19:59.106 time_based=1 00:19:59.106 runtime=1 00:19:59.106 ioengine=libaio 00:19:59.106 direct=1 00:19:59.106 bs=4096 00:19:59.106 iodepth=128 00:19:59.106 norandommap=0 00:19:59.106 numjobs=1 00:19:59.106 00:19:59.106 verify_dump=1 00:19:59.106 verify_backlog=512 00:19:59.106 verify_state_save=0 00:19:59.106 do_verify=1 00:19:59.106 verify=crc32c-intel 00:19:59.106 [job0] 00:19:59.106 filename=/dev/nvme0n1 00:19:59.106 [job1] 00:19:59.106 filename=/dev/nvme0n2 00:19:59.106 [job2] 00:19:59.106 filename=/dev/nvme0n3 00:19:59.106 [job3] 00:19:59.106 filename=/dev/nvme0n4 00:19:59.106 Could not set queue depth (nvme0n1) 00:19:59.106 Could not set queue depth (nvme0n2) 00:19:59.106 Could not set queue depth (nvme0n3) 00:19:59.106 Could not set queue depth (nvme0n4) 00:19:59.365 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:59.365 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:59.365 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:59.365 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:59.365 fio-3.35 00:19:59.365 Starting 4 threads 00:20:00.742 00:20:00.742 job0: (groupid=0, jobs=1): err= 0: pid=1383327: Sun Dec 15 07:00:22 2024 00:20:00.742 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:20:00.742 slat (usec): min=2, max=1655, avg=98.23, stdev=257.29 00:20:00.742 clat (usec): min=10174, max=16395, avg=12781.13, stdev=429.80 00:20:00.742 lat (usec): min=10181, max=16404, avg=12879.36, stdev=398.92 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[11731], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:20:00.742 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:20:00.742 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13173], 00:20:00.742 | 99.00th=[13698], 99.50th=[14877], 99.90th=[16319], 99.95th=[16319], 00:20:00.742 | 99.99th=[16450] 00:20:00.742 write: IOPS=5152, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1005msec); 0 zone resets 00:20:00.742 slat (usec): min=2, max=1951, avg=92.11, stdev=240.57 00:20:00.742 clat (usec): min=4584, max=13405, 
avg=11939.54, stdev=694.86 00:20:00.742 lat (usec): min=4594, max=13776, avg=12031.66, stdev=680.96 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[ 7832], 5.00th=[11207], 10.00th=[11469], 20.00th=[11731], 00:20:00.742 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:20:00.742 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:00.742 | 99.00th=[12780], 99.50th=[12780], 99.90th=[13173], 99.95th=[13304], 00:20:00.742 | 99.99th=[13435] 00:20:00.742 bw ( KiB/s): min=20480, max=20480, per=19.23%, avg=20480.00, stdev= 0.00, samples=2 00:20:00.742 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:20:00.742 lat (msec) : 10=0.78%, 20=99.22% 00:20:00.742 cpu : usr=2.69%, sys=4.98%, ctx=1364, majf=0, minf=1 00:20:00.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:00.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.742 issued rwts: total=5120,5178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.742 job1: (groupid=0, jobs=1): err= 0: pid=1383336: Sun Dec 15 07:00:22 2024 00:20:00.742 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:20:00.742 slat (usec): min=2, max=1505, avg=98.22, stdev=249.74 00:20:00.742 clat (usec): min=11340, max=16378, avg=12764.52, stdev=380.23 00:20:00.742 lat (usec): min=11535, max=16397, avg=12862.74, stdev=347.25 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:20:00.742 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:20:00.742 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13173], 00:20:00.742 | 99.00th=[13698], 99.50th=[14091], 99.90th=[15533], 99.95th=[15664], 00:20:00.742 | 99.99th=[16319] 00:20:00.742 write: IOPS=5170, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1005msec); 0 zone resets 00:20:00.742 slat (usec): min=2, max=1945, avg=91.96, stdev=238.82 00:20:00.742 clat (usec): min=3992, max=13507, avg=11912.89, stdev=748.93 00:20:00.742 lat (usec): min=4635, max=13513, avg=12004.85, stdev=733.54 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[ 7767], 5.00th=[11207], 10.00th=[11338], 20.00th=[11731], 00:20:00.742 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:20:00.742 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:00.742 | 99.00th=[12780], 99.50th=[12780], 99.90th=[13042], 99.95th=[13173], 00:20:00.742 | 99.99th=[13566] 00:20:00.742 bw ( KiB/s): min=20480, max=20480, per=19.23%, avg=20480.00, stdev= 0.00, samples=2 00:20:00.742 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:20:00.742 lat (msec) : 4=0.01%, 10=0.91%, 20=99.08% 00:20:00.742 cpu : usr=3.09%, sys=4.58%, ctx=1409, majf=0, minf=1 00:20:00.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:00.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.742 issued rwts: total=5120,5196,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.742 job2: (groupid=0, jobs=1): err= 0: pid=1383356: Sun Dec 15 07:00:22 2024 00:20:00.742 read: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec) 00:20:00.742 slat (usec): 
min=2, max=1231, avg=60.07, stdev=218.74 00:20:00.742 clat (usec): min=6611, max=13082, avg=7996.76, stdev=470.24 00:20:00.742 lat (usec): min=6663, max=13091, avg=8056.83, stdev=463.32 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[ 6849], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 7767], 00:20:00.742 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8029], 60.00th=[ 8094], 00:20:00.742 | 70.00th=[ 8160], 80.00th=[ 8225], 90.00th=[ 8356], 95.00th=[ 8455], 00:20:00.742 | 99.00th=[ 9110], 99.50th=[10814], 99.90th=[13042], 99.95th=[13042], 00:20:00.742 | 99.99th=[13042] 00:20:00.742 write: IOPS=8166, BW=31.9MiB/s (33.4MB/s)(32.1MiB/1006msec); 0 zone resets 00:20:00.742 slat (usec): min=2, max=1513, avg=58.12, stdev=210.76 00:20:00.742 clat (usec): min=1364, max=9088, avg=7563.68, stdev=535.77 00:20:00.742 lat (usec): min=1376, max=9093, avg=7621.80, stdev=534.74 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[ 5538], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7439], 00:20:00.742 | 30.00th=[ 7504], 40.00th=[ 7570], 50.00th=[ 7635], 60.00th=[ 7701], 00:20:00.742 | 70.00th=[ 7767], 80.00th=[ 7832], 90.00th=[ 7963], 95.00th=[ 8094], 00:20:00.742 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 8848], 99.95th=[ 8848], 00:20:00.742 | 99.99th=[ 9110] 00:20:00.742 bw ( KiB/s): min=32768, max=32768, per=30.77%, avg=32768.00, stdev= 0.00, samples=2 00:20:00.742 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:20:00.742 lat (msec) : 2=0.02%, 4=0.13%, 10=99.54%, 20=0.30% 00:20:00.742 cpu : usr=4.68%, sys=6.37%, ctx=1017, majf=0, minf=2 00:20:00.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:00.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.742 issued rwts: total=8192,8215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.742 job3: (groupid=0, jobs=1): err= 0: pid=1383362: Sun Dec 15 07:00:22 2024 00:20:00.742 read: IOPS=8123, BW=31.7MiB/s (33.3MB/s)(31.8MiB/1002msec) 00:20:00.742 slat (usec): min=2, max=1528, avg=60.57, stdev=230.87 00:20:00.742 clat (usec): min=628, max=9865, avg=7917.38, stdev=503.39 00:20:00.742 lat (usec): min=1982, max=9883, avg=7977.95, stdev=449.69 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[ 6390], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 7832], 00:20:00.742 | 30.00th=[ 7898], 40.00th=[ 7963], 50.00th=[ 7963], 60.00th=[ 8029], 00:20:00.742 | 70.00th=[ 8094], 80.00th=[ 8160], 90.00th=[ 8225], 95.00th=[ 8356], 00:20:00.742 | 99.00th=[ 8586], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[ 9634], 00:20:00.742 | 99.99th=[ 9896] 00:20:00.742 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 00:20:00.742 slat (usec): min=2, max=1444, avg=58.08, stdev=218.81 00:20:00.742 clat (usec): min=5589, max=9143, avg=7619.45, stdev=305.28 00:20:00.742 lat (usec): min=5599, max=9156, avg=7677.53, stdev=215.15 00:20:00.742 clat percentiles (usec): 00:20:00.742 | 1.00th=[ 6390], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7504], 00:20:00.742 | 30.00th=[ 7504], 40.00th=[ 7570], 50.00th=[ 7635], 60.00th=[ 7701], 00:20:00.742 | 70.00th=[ 7767], 80.00th=[ 7832], 90.00th=[ 7898], 95.00th=[ 7963], 00:20:00.742 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[ 8455], 99.95th=[ 8455], 00:20:00.742 | 99.99th=[ 9110] 00:20:00.742 bw ( KiB/s): min=32768, max=32768, per=30.77%, avg=32768.00, stdev= 0.00, samples=2 
00:20:00.742 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:20:00.742 lat (usec) : 750=0.01% 00:20:00.742 lat (msec) : 2=0.02%, 4=0.24%, 10=99.73% 00:20:00.742 cpu : usr=3.80%, sys=7.29%, ctx=1024, majf=0, minf=1 00:20:00.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:00.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.742 issued rwts: total=8140,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.742 00:20:00.742 Run status group 0 (all jobs): 00:20:00.742 READ: bw=103MiB/s (108MB/s), 19.9MiB/s-31.8MiB/s (20.9MB/s-33.4MB/s), io=104MiB (109MB), run=1002-1006msec 00:20:00.742 WRITE: bw=104MiB/s (109MB/s), 20.1MiB/s-31.9MiB/s (21.1MB/s-33.5MB/s), io=105MiB (110MB), run=1002-1006msec 00:20:00.742 00:20:00.742 Disk stats (read/write): 00:20:00.742 nvme0n1: ios=4145/4464, merge=0/0, ticks=25794/26255, in_queue=52049, util=84.35% 00:20:00.742 nvme0n2: ios=4096/4471, merge=0/0, ticks=25780/26255, in_queue=52035, util=85.31% 00:20:00.742 nvme0n3: ios=6656/6936, merge=0/0, ticks=52481/51622, in_queue=104103, util=88.47% 00:20:00.742 nvme0n4: ios=6656/6945, merge=0/0, ticks=16979/17060, in_queue=34039, util=89.50% 00:20:00.742 07:00:22 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:00.742 [global] 00:20:00.742 thread=1 00:20:00.742 invalidate=1 00:20:00.742 rw=randwrite 00:20:00.742 time_based=1 00:20:00.742 runtime=1 00:20:00.742 ioengine=libaio 00:20:00.742 direct=1 00:20:00.742 bs=4096 00:20:00.743 iodepth=128 00:20:00.743 norandommap=0 00:20:00.743 numjobs=1 00:20:00.743 00:20:00.743 verify_dump=1 00:20:00.743 verify_backlog=512 00:20:00.743 verify_state_save=0 00:20:00.743 do_verify=1 00:20:00.743 verify=crc32c-intel 00:20:00.743 [job0] 00:20:00.743 filename=/dev/nvme0n1 00:20:00.743 [job1] 00:20:00.743 filename=/dev/nvme0n2 00:20:00.743 [job2] 00:20:00.743 filename=/dev/nvme0n3 00:20:00.743 [job3] 00:20:00.743 filename=/dev/nvme0n4 00:20:00.743 Could not set queue depth (nvme0n1) 00:20:00.743 Could not set queue depth (nvme0n2) 00:20:00.743 Could not set queue depth (nvme0n3) 00:20:00.743 Could not set queue depth (nvme0n4) 00:20:01.001 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:01.001 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:01.001 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:01.001 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:01.001 fio-3.35 00:20:01.001 Starting 4 threads 00:20:02.470 00:20:02.470 job0: (groupid=0, jobs=1): err= 0: pid=1383762: Sun Dec 15 07:00:23 2024 00:20:02.470 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:20:02.470 slat (usec): min=2, max=1559, avg=80.45, stdev=262.67 00:20:02.470 clat (usec): min=8310, max=11976, avg=10410.91, stdev=413.13 00:20:02.470 lat (usec): min=9502, max=11986, avg=10491.36, stdev=322.53 00:20:02.470 clat percentiles (usec): 00:20:02.470 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10159], 00:20:02.470 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:20:02.470 | 
70.00th=[10552], 80.00th=[10683], 90.00th=[10814], 95.00th=[10945], 00:20:02.470 | 99.00th=[11600], 99.50th=[11731], 99.90th=[11863], 99.95th=[11994], 00:20:02.470 | 99.99th=[11994] 00:20:02.470 write: IOPS=6466, BW=25.3MiB/s (26.5MB/s)(25.3MiB/1002msec); 0 zone resets 00:20:02.470 slat (usec): min=2, max=1441, avg=74.78, stdev=244.44 00:20:02.470 clat (usec): min=1668, max=11443, avg=9706.44, stdev=633.84 00:20:02.470 lat (usec): min=2980, max=11453, avg=9781.23, stdev=583.83 00:20:02.470 clat percentiles (usec): 00:20:02.470 | 1.00th=[ 6652], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9503], 00:20:02.470 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:20:02.470 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10159], 95.00th=[10290], 00:20:02.470 | 99.00th=[10945], 99.50th=[11207], 99.90th=[11338], 99.95th=[11469], 00:20:02.470 | 99.99th=[11469] 00:20:02.470 bw ( KiB/s): min=24696, max=26120, per=27.38%, avg=25408.00, stdev=1006.92, samples=2 00:20:02.470 iops : min= 6174, max= 6530, avg=6352.00, stdev=251.73, samples=2 00:20:02.470 lat (msec) : 2=0.01%, 4=0.08%, 10=46.00%, 20=53.91% 00:20:02.470 cpu : usr=3.40%, sys=4.70%, ctx=3254, majf=0, minf=1 00:20:02.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:02.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.470 issued rwts: total=6144,6479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.470 job1: (groupid=0, jobs=1): err= 0: pid=1383774: Sun Dec 15 07:00:23 2024 00:20:02.470 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:20:02.470 slat (usec): min=2, max=1296, avg=80.30, stdev=244.33 00:20:02.470 clat (usec): min=8725, max=11896, avg=10417.45, stdev=397.06 00:20:02.470 lat (usec): min=9481, max=11904, avg=10497.75, stdev=317.48 00:20:02.470 clat percentiles (usec): 00:20:02.470 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:20:02.470 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:20:02.470 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10814], 95.00th=[10945], 00:20:02.470 | 99.00th=[11600], 99.50th=[11731], 99.90th=[11863], 99.95th=[11863], 00:20:02.470 | 99.99th=[11863] 00:20:02.470 write: IOPS=6490, BW=25.4MiB/s (26.6MB/s)(25.4MiB/1002msec); 0 zone resets 00:20:02.470 slat (usec): min=2, max=1262, avg=74.70, stdev=226.78 00:20:02.470 clat (usec): min=1088, max=11195, avg=9672.50, stdev=738.69 00:20:02.470 lat (usec): min=1944, max=11206, avg=9747.20, stdev=703.89 00:20:02.470 clat percentiles (usec): 00:20:02.470 | 1.00th=[ 6521], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9503], 00:20:02.470 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:20:02.470 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10159], 95.00th=[10290], 00:20:02.470 | 99.00th=[10552], 99.50th=[10814], 99.90th=[11207], 99.95th=[11207], 00:20:02.470 | 99.99th=[11207] 00:20:02.470 bw ( KiB/s): min=24888, max=26120, per=27.48%, avg=25504.00, stdev=871.16, samples=2 00:20:02.470 iops : min= 6222, max= 6530, avg=6376.00, stdev=217.79, samples=2 00:20:02.470 lat (msec) : 2=0.05%, 4=0.21%, 10=47.36%, 20=52.38% 00:20:02.470 cpu : usr=2.40%, sys=5.89%, ctx=3467, majf=0, minf=1 00:20:02.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:02.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:02.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.470 issued rwts: total=6144,6503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.470 job2: (groupid=0, jobs=1): err= 0: pid=1383796: Sun Dec 15 07:00:23 2024 00:20:02.470 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:20:02.470 slat (usec): min=2, max=2183, avg=108.15, stdev=322.13 00:20:02.470 clat (usec): min=6291, max=20233, avg=13931.28, stdev=2397.82 00:20:02.470 lat (usec): min=6299, max=20243, avg=14039.43, stdev=2399.94 00:20:02.470 clat percentiles (usec): 00:20:02.470 | 1.00th=[10945], 5.00th=[11863], 10.00th=[12387], 20.00th=[12649], 00:20:02.470 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:20:02.470 | 70.00th=[13173], 80.00th=[17171], 90.00th=[18482], 95.00th=[18744], 00:20:02.470 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 00:20:02.470 | 99.99th=[20317] 00:20:02.470 write: IOPS=4641, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1003msec); 0 zone resets 00:20:02.470 slat (usec): min=2, max=2006, avg=104.26, stdev=311.89 00:20:02.470 clat (usec): min=2155, max=20007, avg=13401.57, stdev=2661.96 00:20:02.470 lat (usec): min=2165, max=20012, avg=13505.83, stdev=2665.88 00:20:02.470 clat percentiles (usec): 00:20:02.470 | 1.00th=[ 6259], 5.00th=[11338], 10.00th=[11863], 20.00th=[11994], 00:20:02.470 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:20:02.470 | 70.00th=[12649], 80.00th=[16909], 90.00th=[18220], 95.00th=[18482], 00:20:02.470 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:20:02.470 | 99.99th=[20055] 00:20:02.470 bw ( KiB/s): min=16384, max=20480, per=19.86%, avg=18432.00, stdev=2896.31, samples=2 00:20:02.470 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:20:02.470 lat (msec) : 4=0.30%, 10=0.59%, 20=99.08%, 50=0.02% 00:20:02.470 cpu : usr=1.90%, sys=3.99%, ctx=2174, majf=0, minf=1 00:20:02.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:02.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.470 issued rwts: total=4608,4655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.470 job3: (groupid=0, jobs=1): err= 0: pid=1383805: Sun Dec 15 07:00:23 2024 00:20:02.470 read: IOPS=5509, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1003msec) 00:20:02.470 slat (usec): min=2, max=3870, avg=90.34, stdev=318.69 00:20:02.470 clat (usec): min=2112, max=14976, avg=11597.65, stdev=1658.89 00:20:02.470 lat (usec): min=2132, max=14985, avg=11687.99, stdev=1652.39 00:20:02.470 clat percentiles (usec): 00:20:02.470 | 1.00th=[ 7635], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9634], 00:20:02.470 | 30.00th=[ 9896], 40.00th=[12125], 50.00th=[12518], 60.00th=[12649], 00:20:02.470 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:20:02.470 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14615], 99.95th=[15008], 00:20:02.470 | 99.99th=[15008] 00:20:02.470 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:20:02.470 slat (usec): min=2, max=1797, avg=85.53, stdev=299.08 00:20:02.470 clat (usec): min=6485, max=13550, avg=11166.60, stdev=1490.09 00:20:02.470 lat (usec): min=6494, max=13560, avg=11252.13, stdev=1487.21 00:20:02.470 clat percentiles (usec): 
00:20:02.470 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9372], 00:20:02.470 | 30.00th=[ 9503], 40.00th=[11600], 50.00th=[11994], 60.00th=[12125], 00:20:02.470 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12649], 95.00th=[12911], 00:20:02.470 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13435], 99.95th=[13566], 00:20:02.470 | 99.99th=[13566] 00:20:02.470 bw ( KiB/s): min=20480, max=24576, per=24.28%, avg=22528.00, stdev=2896.31, samples=2 00:20:02.470 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:20:02.470 lat (msec) : 4=0.05%, 10=33.30%, 20=66.64% 00:20:02.470 cpu : usr=3.09%, sys=3.89%, ctx=1774, majf=0, minf=1 00:20:02.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:02.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.470 issued rwts: total=5526,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.470 00:20:02.470 Run status group 0 (all jobs): 00:20:02.470 READ: bw=87.3MiB/s (91.6MB/s), 17.9MiB/s-24.0MiB/s (18.8MB/s-25.1MB/s), io=87.6MiB (91.8MB), run=1002-1003msec 00:20:02.470 WRITE: bw=90.6MiB/s (95.0MB/s), 18.1MiB/s-25.4MiB/s (19.0MB/s-26.6MB/s), io=90.9MiB (95.3MB), run=1002-1003msec 00:20:02.470 00:20:02.470 Disk stats (read/write): 00:20:02.470 nvme0n1: ios=5170/5355, merge=0/0, ticks=13342/12930, in_queue=26272, util=83.87% 00:20:02.470 nvme0n2: ios=5120/5352, merge=0/0, ticks=13332/12895, in_queue=26227, util=84.90% 00:20:02.470 nvme0n3: ios=3584/3923, merge=0/0, ticks=12883/13477, in_queue=26360, util=88.23% 00:20:02.470 nvme0n4: ios=4608/4795, merge=0/0, ticks=17530/17492, in_queue=35022, util=89.46% 00:20:02.470 07:00:23 -- target/fio.sh@55 -- # sync 00:20:02.470 07:00:23 -- target/fio.sh@59 -- # fio_pid=1384006 00:20:02.470 07:00:23 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:02.470 07:00:23 -- target/fio.sh@61 -- # sleep 3 00:20:02.470 [global] 00:20:02.470 thread=1 00:20:02.470 invalidate=1 00:20:02.470 rw=read 00:20:02.470 time_based=1 00:20:02.470 runtime=10 00:20:02.470 ioengine=libaio 00:20:02.470 direct=1 00:20:02.470 bs=4096 00:20:02.470 iodepth=1 00:20:02.470 norandommap=1 00:20:02.470 numjobs=1 00:20:02.470 00:20:02.470 [job0] 00:20:02.470 filename=/dev/nvme0n1 00:20:02.470 [job1] 00:20:02.470 filename=/dev/nvme0n2 00:20:02.470 [job2] 00:20:02.470 filename=/dev/nvme0n3 00:20:02.470 [job3] 00:20:02.470 filename=/dev/nvme0n4 00:20:02.470 Could not set queue depth (nvme0n1) 00:20:02.470 Could not set queue depth (nvme0n2) 00:20:02.470 Could not set queue depth (nvme0n3) 00:20:02.470 Could not set queue depth (nvme0n4) 00:20:02.729 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:02.729 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:02.729 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:02.729 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:02.729 fio-3.35 00:20:02.729 Starting 4 threads 00:20:05.265 07:00:26 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:05.523 fio: io_u error on file /dev/nvme0n4: Operation not 
supported: read offset=104079360, buflen=4096 00:20:05.523 fio: pid=1384253, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:05.524 07:00:27 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:05.782 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=94507008, buflen=4096 00:20:05.782 fio: pid=1384244, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:05.782 07:00:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:05.782 07:00:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:05.782 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45240320, buflen=4096 00:20:05.782 fio: pid=1384196, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:06.041 07:00:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:06.041 07:00:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:06.041 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12582912, buflen=4096 00:20:06.041 fio: pid=1384216, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:06.041 07:00:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:06.041 07:00:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:06.300 00:20:06.300 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384196: Sun Dec 15 07:00:27 2024 00:20:06.300 read: IOPS=9266, BW=36.2MiB/s (38.0MB/s)(107MiB/2960msec) 00:20:06.300 slat (usec): min=3, max=23296, avg= 8.71, stdev=195.51 00:20:06.300 clat (usec): min=47, max=313, avg=97.82, stdev=24.57 00:20:06.300 lat (usec): min=51, max=23382, avg=106.53, stdev=197.16 00:20:06.300 clat percentiles (usec): 00:20:06.300 | 1.00th=[ 58], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 77], 00:20:06.300 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 97], 00:20:06.300 | 70.00th=[ 120], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 137], 00:20:06.300 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 188], 00:20:06.300 | 99.99th=[ 227] 00:20:06.300 bw ( KiB/s): min=29616, max=45776, per=25.71%, avg=36214.40, stdev=6966.24, samples=5 00:20:06.300 iops : min= 7404, max=11444, avg=9053.60, stdev=1741.56, samples=5 00:20:06.300 lat (usec) : 50=0.04%, 100=60.70%, 250=39.25%, 500=0.01% 00:20:06.300 cpu : usr=2.37%, sys=7.13%, ctx=27437, majf=0, minf=2 00:20:06.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 issued rwts: total=27430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.300 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384216: Sun Dec 15 07:00:27 2024 00:20:06.300 read: IOPS=11.3k, BW=44.1MiB/s (46.3MB/s)(140MiB/3173msec) 00:20:06.300 slat (usec): min=8, max=15943, avg=10.87, stdev=175.30 00:20:06.300 clat (usec): min=41, max=129, 
avg=75.79, stdev= 8.07 00:20:06.300 lat (usec): min=57, max=16035, avg=86.66, stdev=175.55 00:20:06.300 clat percentiles (usec): 00:20:06.300 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 68], 20.00th=[ 72], 00:20:06.300 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 78], 00:20:06.300 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 88], 00:20:06.300 | 99.00th=[ 95], 99.50th=[ 98], 99.90th=[ 103], 99.95th=[ 106], 00:20:06.300 | 99.99th=[ 114] 00:20:06.300 bw ( KiB/s): min=42787, max=45664, per=31.90%, avg=44939.17, stdev=1081.76, samples=6 00:20:06.300 iops : min=10696, max=11416, avg=11234.67, stdev=270.74, samples=6 00:20:06.300 lat (usec) : 50=0.04%, 100=99.72%, 250=0.24% 00:20:06.300 cpu : usr=5.08%, sys=16.05%, ctx=35848, majf=0, minf=1 00:20:06.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 issued rwts: total=35841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.300 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384244: Sun Dec 15 07:00:27 2024 00:20:06.300 read: IOPS=8276, BW=32.3MiB/s (33.9MB/s)(90.1MiB/2788msec) 00:20:06.300 slat (usec): min=8, max=15818, avg=10.17, stdev=130.21 00:20:06.300 clat (usec): min=70, max=263, avg=108.07, stdev=21.08 00:20:06.300 lat (usec): min=78, max=15923, avg=118.25, stdev=131.87 00:20:06.300 clat percentiles (usec): 00:20:06.300 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:20:06.300 | 30.00th=[ 90], 40.00th=[ 95], 50.00th=[ 106], 60.00th=[ 120], 00:20:06.300 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 139], 00:20:06.300 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 190], 00:20:06.300 | 99.99th=[ 223] 00:20:06.300 bw ( KiB/s): min=29624, max=39968, per=23.76%, avg=33472.00, stdev=4583.18, samples=5 00:20:06.300 iops : min= 7406, max= 9992, avg=8368.00, stdev=1145.79, samples=5 00:20:06.300 lat (usec) : 100=46.10%, 250=53.88%, 500=0.01% 00:20:06.300 cpu : usr=4.16%, sys=11.63%, ctx=23077, majf=0, minf=2 00:20:06.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 issued rwts: total=23074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.300 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384253: Sun Dec 15 07:00:27 2024 00:20:06.300 read: IOPS=9826, BW=38.4MiB/s (40.2MB/s)(99.3MiB/2586msec) 00:20:06.300 slat (nsec): min=8333, max=37623, avg=8923.50, stdev=904.08 00:20:06.300 clat (usec): min=69, max=197, avg=90.96, stdev=11.74 00:20:06.300 lat (usec): min=80, max=205, avg=99.88, stdev=11.83 00:20:06.300 clat percentiles (usec): 00:20:06.300 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:20:06.300 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:20:06.300 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 101], 95.00th=[ 110], 00:20:06.300 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 178], 99.95th=[ 182], 00:20:06.300 | 99.99th=[ 194] 00:20:06.300 bw ( KiB/s): min=38696, max=41048, per=28.29%, 
avg=39851.20, stdev=1076.83, samples=5 00:20:06.300 iops : min= 9674, max=10262, avg=9962.80, stdev=269.21, samples=5 00:20:06.300 lat (usec) : 100=88.83%, 250=11.17% 00:20:06.300 cpu : usr=5.18%, sys=13.46%, ctx=25411, majf=0, minf=2 00:20:06.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.300 issued rwts: total=25411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.301 00:20:06.301 Run status group 0 (all jobs): 00:20:06.301 READ: bw=138MiB/s (144MB/s), 32.3MiB/s-44.1MiB/s (33.9MB/s-46.3MB/s), io=437MiB (458MB), run=2586-3173msec 00:20:06.301 00:20:06.301 Disk stats (read/write): 00:20:06.301 nvme0n1: ios=25544/0, merge=0/0, ticks=2461/0, in_queue=2461, util=91.95% 00:20:06.301 nvme0n2: ios=34249/0, merge=0/0, ticks=2314/0, in_queue=2314, util=92.58% 00:20:06.301 nvme0n3: ios=21271/0, merge=0/0, ticks=2183/0, in_queue=2183, util=95.80% 00:20:06.301 nvme0n4: ios=25260/0, merge=0/0, ticks=2011/0, in_queue=2011, util=96.42% 00:20:06.301 07:00:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:06.301 07:00:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:06.559 07:00:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:06.559 07:00:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:06.818 07:00:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:06.818 07:00:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:07.077 07:00:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:07.077 07:00:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:07.077 07:00:28 -- target/fio.sh@69 -- # fio_status=0 00:20:07.077 07:00:28 -- target/fio.sh@70 -- # wait 1384006 00:20:07.077 07:00:28 -- target/fio.sh@70 -- # fio_status=4 00:20:07.077 07:00:28 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:08.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:08.012 07:00:29 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:08.012 07:00:29 -- common/autotest_common.sh@1208 -- # local i=0 00:20:08.012 07:00:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:08.012 07:00:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:08.012 07:00:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:08.012 07:00:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:08.012 07:00:29 -- common/autotest_common.sh@1220 -- # return 0 00:20:08.012 07:00:29 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:08.012 07:00:29 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:08.012 nvmf hotplug test: fio failed as expected 00:20:08.012 07:00:29 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.270 
07:00:29 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:08.271 07:00:29 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:08.271 07:00:29 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:08.271 07:00:29 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:08.271 07:00:29 -- target/fio.sh@91 -- # nvmftestfini 00:20:08.271 07:00:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.271 07:00:29 -- nvmf/common.sh@116 -- # sync 00:20:08.271 07:00:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:08.271 07:00:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:08.271 07:00:29 -- nvmf/common.sh@119 -- # set +e 00:20:08.271 07:00:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.271 07:00:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:08.271 rmmod nvme_rdma 00:20:08.271 rmmod nvme_fabrics 00:20:08.271 07:00:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.271 07:00:29 -- nvmf/common.sh@123 -- # set -e 00:20:08.271 07:00:29 -- nvmf/common.sh@124 -- # return 0 00:20:08.271 07:00:29 -- nvmf/common.sh@477 -- # '[' -n 1381059 ']' 00:20:08.271 07:00:29 -- nvmf/common.sh@478 -- # killprocess 1381059 00:20:08.271 07:00:29 -- common/autotest_common.sh@936 -- # '[' -z 1381059 ']' 00:20:08.271 07:00:29 -- common/autotest_common.sh@940 -- # kill -0 1381059 00:20:08.271 07:00:29 -- common/autotest_common.sh@941 -- # uname 00:20:08.271 07:00:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.271 07:00:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1381059 00:20:08.530 07:00:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:08.530 07:00:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:08.530 07:00:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1381059' 00:20:08.530 killing process with pid 1381059 00:20:08.530 07:00:29 -- common/autotest_common.sh@955 -- # kill 1381059 00:20:08.530 07:00:29 -- common/autotest_common.sh@960 -- # wait 1381059 00:20:08.789 07:00:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:08.789 07:00:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:08.789 00:20:08.789 real 0m26.503s 00:20:08.789 user 2m9.282s 00:20:08.789 sys 0m10.015s 00:20:08.789 07:00:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.789 07:00:30 -- common/autotest_common.sh@10 -- # set +x 00:20:08.789 ************************************ 00:20:08.789 END TEST nvmf_fio_target 00:20:08.789 ************************************ 00:20:08.789 07:00:30 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:08.789 07:00:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:08.789 07:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.789 07:00:30 -- common/autotest_common.sh@10 -- # set +x 00:20:08.789 ************************************ 00:20:08.789 START TEST nvmf_bdevio 00:20:08.789 ************************************ 00:20:08.789 07:00:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:08.789 * Looking for test storage... 
00:20:08.789 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:08.789 07:00:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:08.789 07:00:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:08.789 07:00:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:08.789 07:00:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:08.789 07:00:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:08.789 07:00:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:08.789 07:00:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:08.789 07:00:30 -- scripts/common.sh@335 -- # IFS=.-: 00:20:08.789 07:00:30 -- scripts/common.sh@335 -- # read -ra ver1 00:20:08.789 07:00:30 -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.789 07:00:30 -- scripts/common.sh@336 -- # read -ra ver2 00:20:08.789 07:00:30 -- scripts/common.sh@337 -- # local 'op=<' 00:20:08.789 07:00:30 -- scripts/common.sh@339 -- # ver1_l=2 00:20:08.789 07:00:30 -- scripts/common.sh@340 -- # ver2_l=1 00:20:08.789 07:00:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:08.789 07:00:30 -- scripts/common.sh@343 -- # case "$op" in 00:20:08.789 07:00:30 -- scripts/common.sh@344 -- # : 1 00:20:08.789 07:00:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:08.789 07:00:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.789 07:00:30 -- scripts/common.sh@364 -- # decimal 1 00:20:08.789 07:00:30 -- scripts/common.sh@352 -- # local d=1 00:20:08.789 07:00:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.789 07:00:30 -- scripts/common.sh@354 -- # echo 1 00:20:08.789 07:00:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:08.789 07:00:30 -- scripts/common.sh@365 -- # decimal 2 00:20:08.789 07:00:30 -- scripts/common.sh@352 -- # local d=2 00:20:08.789 07:00:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.789 07:00:30 -- scripts/common.sh@354 -- # echo 2 00:20:08.789 07:00:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:08.789 07:00:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:08.789 07:00:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:08.789 07:00:30 -- scripts/common.sh@367 -- # return 0 00:20:08.789 07:00:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.789 07:00:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:08.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.789 --rc genhtml_branch_coverage=1 00:20:08.789 --rc genhtml_function_coverage=1 00:20:08.789 --rc genhtml_legend=1 00:20:08.790 --rc geninfo_all_blocks=1 00:20:08.790 --rc geninfo_unexecuted_blocks=1 00:20:08.790 00:20:08.790 ' 00:20:08.790 07:00:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:08.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.790 --rc genhtml_branch_coverage=1 00:20:08.790 --rc genhtml_function_coverage=1 00:20:08.790 --rc genhtml_legend=1 00:20:08.790 --rc geninfo_all_blocks=1 00:20:08.790 --rc geninfo_unexecuted_blocks=1 00:20:08.790 00:20:08.790 ' 00:20:08.790 07:00:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:08.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.790 --rc genhtml_branch_coverage=1 00:20:08.790 --rc genhtml_function_coverage=1 00:20:08.790 --rc genhtml_legend=1 00:20:08.790 --rc geninfo_all_blocks=1 00:20:08.790 --rc geninfo_unexecuted_blocks=1 00:20:08.790 00:20:08.790 ' 
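The lt 1.15 2 / cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates the 2.x option names: both version strings are split on separators and compared field by field. A condensed sketch of the same comparison (illustrative, not the verbatim common.sh code, and it assumes purely numeric fields where the real cmp_versions validates each one):

lt() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

# e.g. pick the 1.x lcov option spellings when the installed lcov is pre-2.0
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use 1.x lcov options"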
00:20:08.790 07:00:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:08.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.790 --rc genhtml_branch_coverage=1 00:20:08.790 --rc genhtml_function_coverage=1 00:20:08.790 --rc genhtml_legend=1 00:20:08.790 --rc geninfo_all_blocks=1 00:20:08.790 --rc geninfo_unexecuted_blocks=1 00:20:08.790 00:20:08.790 ' 00:20:08.790 07:00:30 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.790 07:00:30 -- nvmf/common.sh@7 -- # uname -s 00:20:08.790 07:00:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.790 07:00:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.790 07:00:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.790 07:00:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.790 07:00:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.790 07:00:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.790 07:00:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.790 07:00:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.790 07:00:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.790 07:00:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.790 07:00:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:08.790 07:00:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:08.790 07:00:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.790 07:00:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.790 07:00:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.790 07:00:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:08.790 07:00:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.790 07:00:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.790 07:00:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.790 07:00:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.790 07:00:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.790 07:00:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.790 07:00:30 -- paths/export.sh@5 -- # export PATH 00:20:08.790 07:00:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.790 07:00:30 -- nvmf/common.sh@46 -- # : 0 00:20:08.790 07:00:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:08.790 07:00:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:08.790 07:00:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:08.790 07:00:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.790 07:00:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.790 07:00:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:08.790 07:00:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:08.790 07:00:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:08.790 07:00:30 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:08.790 07:00:30 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:08.790 07:00:30 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:08.790 07:00:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:08.790 07:00:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.790 07:00:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:08.790 07:00:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:08.790 07:00:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:08.790 07:00:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.790 07:00:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.790 07:00:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.049 07:00:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:09.049 07:00:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:09.049 07:00:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:09.049 07:00:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.623 07:00:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:15.623 07:00:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:15.623 07:00:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:15.623 07:00:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:15.623 07:00:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:15.623 07:00:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:15.623 07:00:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:15.623 07:00:36 -- nvmf/common.sh@294 -- # net_devs=() 00:20:15.623 07:00:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:15.623 07:00:36 -- nvmf/common.sh@295 
-- # e810=() 00:20:15.623 07:00:36 -- nvmf/common.sh@295 -- # local -ga e810 00:20:15.623 07:00:36 -- nvmf/common.sh@296 -- # x722=() 00:20:15.623 07:00:36 -- nvmf/common.sh@296 -- # local -ga x722 00:20:15.623 07:00:36 -- nvmf/common.sh@297 -- # mlx=() 00:20:15.623 07:00:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:15.623 07:00:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.623 07:00:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.623 07:00:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.623 07:00:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.623 07:00:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.623 07:00:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.623 07:00:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.623 07:00:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.624 07:00:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.624 07:00:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.624 07:00:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.624 07:00:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:15.624 07:00:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:15.624 07:00:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:15.624 07:00:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:15.624 07:00:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:15.624 07:00:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.624 07:00:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:15.624 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:15.624 07:00:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.624 07:00:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:15.624 07:00:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:15.624 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:15.624 07:00:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:15.624 07:00:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:15.624 07:00:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:15.624 07:00:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.624 07:00:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
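The scan above is nvmf/common.sh bucketing PCI device IDs into NIC families before picking RDMA-capable ports: vendor 0x15b3 is Mellanox, and device 0x1015 (as found on 0000:d9:00.0 and 00.1 here) is a ConnectX-4 Lx. Outside the harness, an equivalent inventory can be pulled straight from lspci (an illustrative one-liner, not the common.sh implementation):

# list Mellanox (vendor 0x15b3) adapters with numeric vendor:device IDs
lspci -nn -d 15b3: | grep -Ei 'ethernet|infiniband'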
00:20:15.624 07:00:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.624 07:00:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:15.624 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:15.624 07:00:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.624 07:00:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:15.624 07:00:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.624 07:00:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:15.624 07:00:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.624 07:00:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:15.624 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:15.624 07:00:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.624 07:00:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:15.624 07:00:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:15.624 07:00:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:15.624 07:00:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:15.624 07:00:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:15.624 07:00:36 -- nvmf/common.sh@57 -- # uname 00:20:15.624 07:00:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:15.624 07:00:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:15.624 07:00:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:15.624 07:00:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:15.624 07:00:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:15.624 07:00:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:15.624 07:00:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:15.624 07:00:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:15.624 07:00:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:15.624 07:00:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:15.624 07:00:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:15.624 07:00:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.624 07:00:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:15.624 07:00:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:15.624 07:00:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:15.624 07:00:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:15.624 07:00:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@104 -- # continue 2 00:20:15.624 07:00:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:15.624 07:00:37 -- nvmf/common.sh@104 -- # continue 2 00:20:15.624 07:00:37 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:15.624 07:00:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.624 07:00:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:15.624 07:00:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:15.624 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.624 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:15.624 altname enp217s0f0np0 00:20:15.624 altname ens818f0np0 00:20:15.624 inet 192.168.100.8/24 scope global mlx_0_0 00:20:15.624 valid_lft forever preferred_lft forever 00:20:15.624 07:00:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:15.624 07:00:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:15.624 07:00:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.624 07:00:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:15.624 07:00:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:15.624 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:15.624 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:15.624 altname enp217s0f1np1 00:20:15.624 altname ens818f1np1 00:20:15.624 inet 192.168.100.9/24 scope global mlx_0_1 00:20:15.624 valid_lft forever preferred_lft forever 00:20:15.624 07:00:37 -- nvmf/common.sh@410 -- # return 0 00:20:15.624 07:00:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.624 07:00:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:15.624 07:00:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:15.624 07:00:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:15.624 07:00:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:15.624 07:00:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:15.624 07:00:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:15.624 07:00:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:15.624 07:00:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:15.624 07:00:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@104 -- # continue 2 00:20:15.624 07:00:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.624 07:00:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.624 07:00:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:15.624 07:00:37 -- 
nvmf/common.sh@104 -- # continue 2 00:20:15.624 07:00:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:15.624 07:00:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.624 07:00:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:15.624 07:00:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:15.624 07:00:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:15.624 07:00:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:15.624 07:00:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:15.624 192.168.100.9' 00:20:15.624 07:00:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:15.624 192.168.100.9' 00:20:15.624 07:00:37 -- nvmf/common.sh@445 -- # head -n 1 00:20:15.624 07:00:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:15.624 07:00:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:15.624 192.168.100.9' 00:20:15.624 07:00:37 -- nvmf/common.sh@446 -- # tail -n +2 00:20:15.624 07:00:37 -- nvmf/common.sh@446 -- # head -n 1 00:20:15.624 07:00:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:15.625 07:00:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:15.625 07:00:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:15.625 07:00:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:15.625 07:00:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:15.625 07:00:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:15.625 07:00:37 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:15.625 07:00:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:15.625 07:00:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:15.625 07:00:37 -- common/autotest_common.sh@10 -- # set +x 00:20:15.625 07:00:37 -- nvmf/common.sh@469 -- # nvmfpid=1388585 00:20:15.625 07:00:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:15.625 07:00:37 -- nvmf/common.sh@470 -- # waitforlisten 1388585 00:20:15.625 07:00:37 -- common/autotest_common.sh@829 -- # '[' -z 1388585 ']' 00:20:15.625 07:00:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.625 07:00:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.625 07:00:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.625 07:00:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.625 07:00:37 -- common/autotest_common.sh@10 -- # set +x 00:20:15.884 [2024-12-15 07:00:37.273952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
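Before nvmf_tgt started up here, allocate_nic_ips derived the listen addresses by scraping ip -o -4 addr show for each RDMA netdev and then peeling the first and second entries off the resulting list with head/tail. The same extraction, condensed (interface names are the ones this rig reported; adjust for other hosts):

# allocate_nic_ips in miniature: first IPv4 address per RDMA netdev
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9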
00:20:15.884 [2024-12-15 07:00:37.274024] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.884 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.884 [2024-12-15 07:00:37.344247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.884 [2024-12-15 07:00:37.379724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:15.884 [2024-12-15 07:00:37.379854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.884 [2024-12-15 07:00:37.379864] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.884 [2024-12-15 07:00:37.379873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.884 [2024-12-15 07:00:37.380010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:15.884 [2024-12-15 07:00:37.380101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:15.884 [2024-12-15 07:00:37.380187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.884 [2024-12-15 07:00:37.380189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:16.820 07:00:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.820 07:00:38 -- common/autotest_common.sh@862 -- # return 0 00:20:16.820 07:00:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:16.821 07:00:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.821 07:00:38 -- common/autotest_common.sh@10 -- # set +x 00:20:16.821 07:00:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.821 07:00:38 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:16.821 07:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.821 07:00:38 -- common/autotest_common.sh@10 -- # set +x 00:20:16.821 [2024-12-15 07:00:38.161931] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bf39b0/0x1bf7e80) succeed. 00:20:16.821 [2024-12-15 07:00:38.171125] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bf4f50/0x1c39520) succeed. 
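With the RDMA transport created against both mlx5 devices, the traced RPCs that follow stand the rest of the target up one call at a time. Issued by hand, the same bring-up is five rpc.py invocations (values taken verbatim from this run):

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420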
00:20:16.821 07:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.821 07:00:38 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:16.821 07:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.821 07:00:38 -- common/autotest_common.sh@10 -- # set +x 00:20:16.821 Malloc0 00:20:16.821 07:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.821 07:00:38 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.821 07:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.821 07:00:38 -- common/autotest_common.sh@10 -- # set +x 00:20:16.821 07:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.821 07:00:38 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:16.821 07:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.821 07:00:38 -- common/autotest_common.sh@10 -- # set +x 00:20:16.821 07:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.821 07:00:38 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:16.821 07:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.821 07:00:38 -- common/autotest_common.sh@10 -- # set +x 00:20:16.821 [2024-12-15 07:00:38.339887] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:16.821 07:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.821 07:00:38 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:16.821 07:00:38 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:16.821 07:00:38 -- nvmf/common.sh@520 -- # config=() 00:20:16.821 07:00:38 -- nvmf/common.sh@520 -- # local subsystem config 00:20:16.821 07:00:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:16.821 07:00:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:16.821 { 00:20:16.821 "params": { 00:20:16.821 "name": "Nvme$subsystem", 00:20:16.821 "trtype": "$TEST_TRANSPORT", 00:20:16.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.821 "adrfam": "ipv4", 00:20:16.821 "trsvcid": "$NVMF_PORT", 00:20:16.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.821 "hdgst": ${hdgst:-false}, 00:20:16.821 "ddgst": ${ddgst:-false} 00:20:16.821 }, 00:20:16.821 "method": "bdev_nvme_attach_controller" 00:20:16.821 } 00:20:16.821 EOF 00:20:16.821 )") 00:20:16.821 07:00:38 -- nvmf/common.sh@542 -- # cat 00:20:16.821 07:00:38 -- nvmf/common.sh@544 -- # jq . 00:20:16.821 07:00:38 -- nvmf/common.sh@545 -- # IFS=, 00:20:16.821 07:00:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:16.821 "params": { 00:20:16.821 "name": "Nvme1", 00:20:16.821 "trtype": "rdma", 00:20:16.821 "traddr": "192.168.100.8", 00:20:16.821 "adrfam": "ipv4", 00:20:16.821 "trsvcid": "4420", 00:20:16.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.821 "hdgst": false, 00:20:16.821 "ddgst": false 00:20:16.821 }, 00:20:16.821 "method": "bdev_nvme_attach_controller" 00:20:16.821 }' 00:20:16.821 [2024-12-15 07:00:38.388866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:16.821 [2024-12-15 07:00:38.388919] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388744 ] 00:20:16.821 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.821 [2024-12-15 07:00:38.459761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:17.080 [2024-12-15 07:00:38.497410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.080 [2024-12-15 07:00:38.497506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.080 [2024-12-15 07:00:38.497508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.080 [2024-12-15 07:00:38.660932] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:17.080 [2024-12-15 07:00:38.660965] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:17.080 I/O targets: 00:20:17.080 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:17.080 00:20:17.080 00:20:17.080 CUnit - A unit testing framework for C - Version 2.1-3 00:20:17.080 http://cunit.sourceforge.net/ 00:20:17.080 00:20:17.080 00:20:17.080 Suite: bdevio tests on: Nvme1n1 00:20:17.080 Test: blockdev write read block ...passed 00:20:17.080 Test: blockdev write zeroes read block ...passed 00:20:17.080 Test: blockdev write zeroes read no split ...passed 00:20:17.080 Test: blockdev write zeroes read split ...passed 00:20:17.080 Test: blockdev write zeroes read split partial ...passed 00:20:17.080 Test: blockdev reset ...[2024-12-15 07:00:38.690803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:17.080 [2024-12-15 07:00:38.713489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:17.339 [2024-12-15 07:00:38.740470] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
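The bdevio run above was wired to the target through the JSON blob printed by gen_nvmf_target_json: a single bdev_nvme_attach_controller stanza pointing back at 192.168.100.8:4420 over rdma. The --json /dev/fd/62 argument seen in the trace is just bash process substitution; spelled out, the invocation is (path is this workspace's):

# pipe the generated initiator config into bdevio without a temp file
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json)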
00:20:17.339 passed 00:20:17.339 Test: blockdev write read 8 blocks ...passed 00:20:17.339 Test: blockdev write read size > 128k ...passed 00:20:17.339 Test: blockdev write read invalid size ...passed 00:20:17.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:17.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:17.339 Test: blockdev write read max offset ...passed 00:20:17.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:17.339 Test: blockdev writev readv 8 blocks ...passed 00:20:17.340 Test: blockdev writev readv 30 x 1block ...passed 00:20:17.340 Test: blockdev writev readv block ...passed 00:20:17.340 Test: blockdev writev readv size > 128k ...passed 00:20:17.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:17.340 Test: blockdev comparev and writev ...[2024-12-15 07:00:38.743337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.743371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.743383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.743393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.743562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.743573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.743584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.743596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.743774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.743784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.743794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.743804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.743970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.743988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.743998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:17.340 [2024-12-15 07:00:38.744006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:17.340 passed 00:20:17.340 Test: blockdev nvme passthru rw ...passed 00:20:17.340 Test: blockdev nvme passthru vendor specific ...[2024-12-15 07:00:38.744270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:17.340 [2024-12-15 07:00:38.744281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.744324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:17.340 [2024-12-15 07:00:38.744333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.744377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:17.340 [2024-12-15 07:00:38.744387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:17.340 [2024-12-15 07:00:38.744433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:17.340 [2024-12-15 07:00:38.744443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:17.340 passed 00:20:17.340 Test: blockdev nvme admin passthru ...passed 00:20:17.340 Test: blockdev copy ...passed 00:20:17.340 00:20:17.340 Run Summary: Type Total Ran Passed Failed Inactive 00:20:17.340 suites 1 1 n/a 0 0 00:20:17.340 tests 23 23 23 0 0 00:20:17.340 asserts 152 152 152 0 n/a 00:20:17.340 00:20:17.340 Elapsed time = 0.171 seconds 00:20:17.340 07:00:38 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.340 07:00:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.340 07:00:38 -- common/autotest_common.sh@10 -- # set +x 00:20:17.340 07:00:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.340 07:00:38 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:17.340 07:00:38 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:17.340 07:00:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:17.340 07:00:38 -- nvmf/common.sh@116 -- # sync 00:20:17.340 07:00:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:17.340 07:00:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:17.340 07:00:38 -- nvmf/common.sh@119 -- # set +e 00:20:17.340 07:00:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:17.340 07:00:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:17.340 rmmod nvme_rdma 00:20:17.340 rmmod nvme_fabrics 00:20:17.340 07:00:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:17.599 07:00:38 -- nvmf/common.sh@123 -- # set -e 00:20:17.599 07:00:38 -- nvmf/common.sh@124 -- # return 0 00:20:17.599 07:00:38 -- nvmf/common.sh@477 -- # '[' -n 1388585 ']' 00:20:17.599 07:00:38 -- nvmf/common.sh@478 -- # killprocess 1388585 00:20:17.599 07:00:38 -- common/autotest_common.sh@936 -- # '[' -z 1388585 ']' 00:20:17.599 07:00:38 -- common/autotest_common.sh@940 -- # kill -0 1388585 00:20:17.599 07:00:38 -- common/autotest_common.sh@941 -- # uname 00:20:17.599 07:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.599 07:00:38 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1388585 00:20:17.599 07:00:39 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:17.599 07:00:39 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:17.599 07:00:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1388585' 00:20:17.599 killing process with pid 1388585 00:20:17.599 07:00:39 -- common/autotest_common.sh@955 -- # kill 1388585 00:20:17.599 07:00:39 -- common/autotest_common.sh@960 -- # wait 1388585 00:20:17.859 07:00:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:17.859 07:00:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:17.859 00:20:17.859 real 0m9.081s 00:20:17.859 user 0m10.679s 00:20:17.859 sys 0m5.781s 00:20:17.859 07:00:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:17.859 07:00:39 -- common/autotest_common.sh@10 -- # set +x 00:20:17.859 ************************************ 00:20:17.859 END TEST nvmf_bdevio 00:20:17.859 ************************************ 00:20:17.859 07:00:39 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:17.859 07:00:39 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:17.859 07:00:39 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:17.859 07:00:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:17.859 07:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:17.859 07:00:39 -- common/autotest_common.sh@10 -- # set +x 00:20:17.859 ************************************ 00:20:17.859 START TEST nvmf_fuzz 00:20:17.859 ************************************ 00:20:17.859 07:00:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:17.859 * Looking for test storage... 00:20:17.859 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:17.859 07:00:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:17.859 07:00:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:17.859 07:00:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:18.118 07:00:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:18.118 07:00:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:18.118 07:00:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:18.118 07:00:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:18.118 07:00:39 -- scripts/common.sh@335 -- # IFS=.-: 00:20:18.118 07:00:39 -- scripts/common.sh@335 -- # read -ra ver1 00:20:18.118 07:00:39 -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.118 07:00:39 -- scripts/common.sh@336 -- # read -ra ver2 00:20:18.118 07:00:39 -- scripts/common.sh@337 -- # local 'op=<' 00:20:18.118 07:00:39 -- scripts/common.sh@339 -- # ver1_l=2 00:20:18.118 07:00:39 -- scripts/common.sh@340 -- # ver2_l=1 00:20:18.118 07:00:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:18.118 07:00:39 -- scripts/common.sh@343 -- # case "$op" in 00:20:18.118 07:00:39 -- scripts/common.sh@344 -- # : 1 00:20:18.118 07:00:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:18.118 07:00:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.118 07:00:39 -- scripts/common.sh@364 -- # decimal 1 00:20:18.118 07:00:39 -- scripts/common.sh@352 -- # local d=1 00:20:18.118 07:00:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.118 07:00:39 -- scripts/common.sh@354 -- # echo 1 00:20:18.118 07:00:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:18.118 07:00:39 -- scripts/common.sh@365 -- # decimal 2 00:20:18.118 07:00:39 -- scripts/common.sh@352 -- # local d=2 00:20:18.118 07:00:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.118 07:00:39 -- scripts/common.sh@354 -- # echo 2 00:20:18.118 07:00:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:18.118 07:00:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:18.118 07:00:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:18.118 07:00:39 -- scripts/common.sh@367 -- # return 0 00:20:18.118 07:00:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.118 07:00:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:18.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.118 --rc genhtml_branch_coverage=1 00:20:18.118 --rc genhtml_function_coverage=1 00:20:18.118 --rc genhtml_legend=1 00:20:18.118 --rc geninfo_all_blocks=1 00:20:18.118 --rc geninfo_unexecuted_blocks=1 00:20:18.118 00:20:18.118 ' 00:20:18.118 07:00:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:18.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.118 --rc genhtml_branch_coverage=1 00:20:18.118 --rc genhtml_function_coverage=1 00:20:18.118 --rc genhtml_legend=1 00:20:18.118 --rc geninfo_all_blocks=1 00:20:18.118 --rc geninfo_unexecuted_blocks=1 00:20:18.118 00:20:18.118 ' 00:20:18.118 07:00:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:18.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.118 --rc genhtml_branch_coverage=1 00:20:18.118 --rc genhtml_function_coverage=1 00:20:18.118 --rc genhtml_legend=1 00:20:18.118 --rc geninfo_all_blocks=1 00:20:18.118 --rc geninfo_unexecuted_blocks=1 00:20:18.118 00:20:18.118 ' 00:20:18.118 07:00:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:18.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.118 --rc genhtml_branch_coverage=1 00:20:18.118 --rc genhtml_function_coverage=1 00:20:18.119 --rc genhtml_legend=1 00:20:18.119 --rc geninfo_all_blocks=1 00:20:18.119 --rc geninfo_unexecuted_blocks=1 00:20:18.119 00:20:18.119 ' 00:20:18.119 07:00:39 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.119 07:00:39 -- nvmf/common.sh@7 -- # uname -s 00:20:18.119 07:00:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.119 07:00:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.119 07:00:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.119 07:00:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.119 07:00:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.119 07:00:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.119 07:00:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.119 07:00:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.119 07:00:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.119 07:00:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.119 07:00:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:18.119 07:00:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:18.119 07:00:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.119 07:00:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.119 07:00:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.119 07:00:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:18.119 07:00:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.119 07:00:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.119 07:00:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.119 07:00:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.119 07:00:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.119 07:00:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.119 07:00:39 -- paths/export.sh@5 -- # export PATH 00:20:18.119 07:00:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.119 07:00:39 -- nvmf/common.sh@46 -- # : 0 00:20:18.119 07:00:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:18.119 07:00:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:18.119 07:00:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:18.119 07:00:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.119 07:00:39 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.119 07:00:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:18.119 07:00:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:18.119 07:00:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:18.119 07:00:39 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:18.119 07:00:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:18.119 07:00:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.119 07:00:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:18.119 07:00:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:18.119 07:00:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:18.119 07:00:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.119 07:00:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.119 07:00:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.119 07:00:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:18.119 07:00:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:18.119 07:00:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:18.119 07:00:39 -- common/autotest_common.sh@10 -- # set +x 00:20:24.687 07:00:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:24.687 07:00:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:24.687 07:00:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:24.687 07:00:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:24.687 07:00:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:24.687 07:00:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:24.687 07:00:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:24.687 07:00:45 -- nvmf/common.sh@294 -- # net_devs=() 00:20:24.687 07:00:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:24.687 07:00:45 -- nvmf/common.sh@295 -- # e810=() 00:20:24.687 07:00:45 -- nvmf/common.sh@295 -- # local -ga e810 00:20:24.687 07:00:45 -- nvmf/common.sh@296 -- # x722=() 00:20:24.687 07:00:45 -- nvmf/common.sh@296 -- # local -ga x722 00:20:24.687 07:00:45 -- nvmf/common.sh@297 -- # mlx=() 00:20:24.687 07:00:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:24.687 07:00:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.687 07:00:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:24.687 07:00:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:24.687 07:00:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:24.687 07:00:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
00:20:24.687 07:00:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:24.687 07:00:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:24.687 07:00:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:24.687 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:24.687 07:00:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:24.687 07:00:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:24.687 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:24.687 07:00:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:24.687 07:00:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:24.687 07:00:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.687 07:00:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:24.687 07:00:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.687 07:00:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:24.687 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:24.687 07:00:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.687 07:00:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.687 07:00:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:24.687 07:00:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.687 07:00:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:24.687 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:24.687 07:00:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.687 07:00:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:24.687 07:00:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:24.687 07:00:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:24.687 07:00:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:24.687 07:00:45 -- nvmf/common.sh@57 -- # uname 00:20:24.687 07:00:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:24.687 07:00:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:24.687 07:00:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:24.687 07:00:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:24.687 
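rdma_device_init has started loading the kernel RDMA stack at this point; the remaining modprobes (ib_uverbs, iw_cm, rdma_cm, rdma_ucm) follow just below. Collected into one helper, the load order from this trace is:

# load_ib_rdma_modules, condensed from the modprobe sequence in this trace
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done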
07:00:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:24.687 07:00:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:24.687 07:00:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:24.687 07:00:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:24.687 07:00:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:24.687 07:00:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:24.687 07:00:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:24.687 07:00:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:24.687 07:00:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:24.687 07:00:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:24.687 07:00:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:24.687 07:00:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:24.687 07:00:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:24.687 07:00:45 -- nvmf/common.sh@104 -- # continue 2 00:20:24.687 07:00:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.687 07:00:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:24.687 07:00:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:24.687 07:00:45 -- nvmf/common.sh@104 -- # continue 2 00:20:24.687 07:00:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:24.687 07:00:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:24.687 07:00:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:24.687 07:00:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:24.687 07:00:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:24.687 07:00:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:24.688 07:00:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:24.688 07:00:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:24.688 07:00:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:24.688 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:24.688 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:24.688 altname enp217s0f0np0 00:20:24.688 altname ens818f0np0 00:20:24.688 inet 192.168.100.8/24 scope global mlx_0_0 00:20:24.688 valid_lft forever preferred_lft forever 00:20:24.688 07:00:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:24.688 07:00:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:24.688 07:00:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:24.688 07:00:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:24.688 07:00:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:24.688 07:00:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:24.688 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:24.688 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:24.688 altname enp217s0f1np1 
00:20:24.688 altname ens818f1np1 00:20:24.688 inet 192.168.100.9/24 scope global mlx_0_1 00:20:24.688 valid_lft forever preferred_lft forever 00:20:24.688 07:00:45 -- nvmf/common.sh@410 -- # return 0 00:20:24.688 07:00:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:24.688 07:00:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:24.688 07:00:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:24.688 07:00:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:24.688 07:00:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:24.688 07:00:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:24.688 07:00:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:24.688 07:00:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:24.688 07:00:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:24.688 07:00:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:24.688 07:00:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:24.688 07:00:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.688 07:00:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:24.688 07:00:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:24.688 07:00:45 -- nvmf/common.sh@104 -- # continue 2 00:20:24.688 07:00:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:24.688 07:00:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.688 07:00:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:24.688 07:00:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:24.688 07:00:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:24.688 07:00:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:24.688 07:00:45 -- nvmf/common.sh@104 -- # continue 2 00:20:24.688 07:00:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:24.688 07:00:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:24.688 07:00:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:24.688 07:00:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:24.688 07:00:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:24.688 07:00:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:24.688 07:00:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:24.688 07:00:45 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:24.688 192.168.100.9' 00:20:24.688 07:00:45 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:24.688 192.168.100.9' 00:20:24.688 07:00:45 -- nvmf/common.sh@445 -- # head -n 1 00:20:24.688 07:00:45 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:24.688 07:00:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:24.688 192.168.100.9' 00:20:24.688 07:00:45 -- nvmf/common.sh@446 -- # tail -n +2 00:20:24.688 07:00:45 -- nvmf/common.sh@446 -- # head -n 1 00:20:24.688 07:00:45 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:24.688 07:00:45 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:24.688 07:00:45 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:20:24.688 07:00:45 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:24.688 07:00:45 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:24.688 07:00:45 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:24.688 07:00:45 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1392181 00:20:24.688 07:00:45 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:24.688 07:00:45 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:24.688 07:00:45 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1392181 00:20:24.688 07:00:45 -- common/autotest_common.sh@829 -- # '[' -z 1392181 ']' 00:20:24.688 07:00:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.688 07:00:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.688 07:00:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.688 07:00:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.688 07:00:45 -- common/autotest_common.sh@10 -- # set +x 00:20:25.256 07:00:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.256 07:00:46 -- common/autotest_common.sh@862 -- # return 0 00:20:25.256 07:00:46 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:25.256 07:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.256 07:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:25.256 07:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.256 07:00:46 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:25.256 07:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.256 07:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:25.256 Malloc0 00:20:25.256 07:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.256 07:00:46 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.256 07:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.256 07:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:25.256 07:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.256 07:00:46 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:25.256 07:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.256 07:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:25.256 07:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.256 07:00:46 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:25.256 07:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.256 07:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:25.256 07:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.256 07:00:46 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:25.256 07:00:46 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:20:57.336 Fuzzing completed. Shutting down the fuzz application 00:20:57.336 00:20:57.336 Dumping successful admin opcodes: 00:20:57.336 8, 9, 10, 24, 00:20:57.336 Dumping successful io opcodes: 00:20:57.336 0, 9, 00:20:57.336 NS: 0x200003af1f00 I/O qp, Total commands completed: 991017, total successful commands: 5803, random_seed: 399614592 00:20:57.336 NS: 0x200003af1f00 admin qp, Total commands completed: 125328, total successful commands: 1025, random_seed: 112471936 00:20:57.336 07:01:17 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:57.336 Fuzzing completed. Shutting down the fuzz application 00:20:57.336 00:20:57.336 Dumping successful admin opcodes: 00:20:57.336 24, 00:20:57.336 Dumping successful io opcodes: 00:20:57.336 00:20:57.336 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2636740356 00:20:57.336 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2636816122 00:20:57.336 07:01:18 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.336 07:01:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.336 07:01:18 -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 07:01:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.336 07:01:18 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:57.336 07:01:18 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:57.336 07:01:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:57.336 07:01:18 -- nvmf/common.sh@116 -- # sync 00:20:57.336 07:01:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:57.336 07:01:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:57.336 07:01:18 -- nvmf/common.sh@119 -- # set +e 00:20:57.336 07:01:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:57.336 07:01:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:57.336 rmmod nvme_rdma 00:20:57.336 rmmod nvme_fabrics 00:20:57.336 07:01:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:57.336 07:01:18 -- nvmf/common.sh@123 -- # set -e 00:20:57.336 07:01:18 -- nvmf/common.sh@124 -- # return 0 00:20:57.336 07:01:18 -- nvmf/common.sh@477 -- # '[' -n 1392181 ']' 00:20:57.336 07:01:18 -- nvmf/common.sh@478 -- # killprocess 1392181 00:20:57.336 07:01:18 -- common/autotest_common.sh@936 -- # '[' -z 1392181 ']' 00:20:57.336 07:01:18 -- common/autotest_common.sh@940 -- # kill -0 1392181 00:20:57.336 07:01:18 -- common/autotest_common.sh@941 -- # uname 00:20:57.336 07:01:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:57.336 07:01:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1392181 00:20:57.336 07:01:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:57.336 07:01:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:57.336 07:01:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1392181' 00:20:57.336 killing process with pid 1392181 00:20:57.336 07:01:18 -- common/autotest_common.sh@955 -- # kill 1392181 00:20:57.336 07:01:18 -- common/autotest_common.sh@960 -- # wait 1392181 00:20:57.336 
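The two fuzz passes above follow target/fabrics_fuzz.sh: a timed, seeded random pass, then a deterministic replay of the bundled JSON corpus. The opcode dumps decode (per the NVMe base spec) to admin 8/9/10/24 = Abort, Set Features, Get Features, Keep Alive and IO 0/9 = Flush, Dataset Management. Condensed, with flag meanings inferred from the trace (-t run time in seconds, -S RNG seed, -F target transport ID; paths relative to the spdk checkout):

    trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
    fuzz=test/app/fuzz/nvme_fuzz/nvme_fuzz
    # pass 1: 30 s of randomized admin/IO commands against the RDMA target
    $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    # pass 2: replay the curated cases from example.json
    $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a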
07:01:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:57.336 07:01:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:57.336 07:01:18 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:57.336 00:20:57.336 real 0m39.384s 00:20:57.336 user 0m49.680s 00:20:57.336 sys 0m20.513s 00:20:57.336 07:01:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:57.336 07:01:18 -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 ************************************ 00:20:57.336 END TEST nvmf_fuzz 00:20:57.336 ************************************ 00:20:57.336 07:01:18 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:57.336 07:01:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:57.336 07:01:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:57.336 07:01:18 -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 ************************************ 00:20:57.336 START TEST nvmf_multiconnection 00:20:57.336 ************************************ 00:20:57.336 07:01:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:20:57.336 * Looking for test storage... 00:20:57.336 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:57.336 07:01:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:57.336 07:01:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:57.336 07:01:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:57.336 07:01:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:57.336 07:01:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:57.336 07:01:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:57.336 07:01:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:57.336 07:01:18 -- scripts/common.sh@335 -- # IFS=.-: 00:20:57.336 07:01:18 -- scripts/common.sh@335 -- # read -ra ver1 00:20:57.336 07:01:18 -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.336 07:01:18 -- scripts/common.sh@336 -- # read -ra ver2 00:20:57.336 07:01:18 -- scripts/common.sh@337 -- # local 'op=<' 00:20:57.336 07:01:18 -- scripts/common.sh@339 -- # ver1_l=2 00:20:57.336 07:01:18 -- scripts/common.sh@340 -- # ver2_l=1 00:20:57.336 07:01:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:57.336 07:01:18 -- scripts/common.sh@343 -- # case "$op" in 00:20:57.336 07:01:18 -- scripts/common.sh@344 -- # : 1 00:20:57.336 07:01:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:57.336 07:01:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:57.336 07:01:18 -- scripts/common.sh@364 -- # decimal 1 00:20:57.596 07:01:18 -- scripts/common.sh@352 -- # local d=1 00:20:57.596 07:01:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.596 07:01:18 -- scripts/common.sh@354 -- # echo 1 00:20:57.596 07:01:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:57.596 07:01:18 -- scripts/common.sh@365 -- # decimal 2 00:20:57.596 07:01:18 -- scripts/common.sh@352 -- # local d=2 00:20:57.596 07:01:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.596 07:01:18 -- scripts/common.sh@354 -- # echo 2 00:20:57.596 07:01:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:57.596 07:01:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:57.596 07:01:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:57.596 07:01:18 -- scripts/common.sh@367 -- # return 0 00:20:57.596 07:01:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.596 07:01:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.596 --rc genhtml_branch_coverage=1 00:20:57.596 --rc genhtml_function_coverage=1 00:20:57.596 --rc genhtml_legend=1 00:20:57.596 --rc geninfo_all_blocks=1 00:20:57.596 --rc geninfo_unexecuted_blocks=1 00:20:57.596 00:20:57.596 ' 00:20:57.596 07:01:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.596 --rc genhtml_branch_coverage=1 00:20:57.596 --rc genhtml_function_coverage=1 00:20:57.596 --rc genhtml_legend=1 00:20:57.596 --rc geninfo_all_blocks=1 00:20:57.596 --rc geninfo_unexecuted_blocks=1 00:20:57.596 00:20:57.596 ' 00:20:57.596 07:01:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.596 --rc genhtml_branch_coverage=1 00:20:57.596 --rc genhtml_function_coverage=1 00:20:57.596 --rc genhtml_legend=1 00:20:57.596 --rc geninfo_all_blocks=1 00:20:57.596 --rc geninfo_unexecuted_blocks=1 00:20:57.596 00:20:57.596 ' 00:20:57.596 07:01:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.596 --rc genhtml_branch_coverage=1 00:20:57.596 --rc genhtml_function_coverage=1 00:20:57.596 --rc genhtml_legend=1 00:20:57.596 --rc geninfo_all_blocks=1 00:20:57.596 --rc geninfo_unexecuted_blocks=1 00:20:57.596 00:20:57.596 ' 00:20:57.596 07:01:18 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.596 07:01:18 -- nvmf/common.sh@7 -- # uname -s 00:20:57.596 07:01:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.596 07:01:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.596 07:01:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.596 07:01:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.596 07:01:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.596 07:01:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.596 07:01:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.596 07:01:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.596 07:01:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.596 07:01:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.596 07:01:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.596 07:01:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:57.596 07:01:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.596 07:01:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.596 07:01:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.596 07:01:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:57.596 07:01:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.596 07:01:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.596 07:01:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.596 07:01:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.596 07:01:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.596 07:01:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.596 07:01:19 -- paths/export.sh@5 -- # export PATH 00:20:57.596 07:01:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.596 07:01:19 -- nvmf/common.sh@46 -- # : 0 00:20:57.596 07:01:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:57.596 07:01:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:57.597 07:01:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:57.597 07:01:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.597 07:01:19 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.597 07:01:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:57.597 07:01:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:57.597 07:01:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:57.597 07:01:19 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:57.597 07:01:19 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:57.597 07:01:19 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:57.597 07:01:19 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:57.597 07:01:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:57.597 07:01:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.597 07:01:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:57.597 07:01:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:57.597 07:01:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:57.597 07:01:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.597 07:01:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.597 07:01:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.597 07:01:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:57.597 07:01:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:57.597 07:01:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:57.597 07:01:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.167 07:01:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:04.167 07:01:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:04.167 07:01:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:04.167 07:01:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:04.167 07:01:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:04.167 07:01:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:04.167 07:01:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:04.167 07:01:25 -- nvmf/common.sh@294 -- # net_devs=() 00:21:04.167 07:01:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:04.167 07:01:25 -- nvmf/common.sh@295 -- # e810=() 00:21:04.167 07:01:25 -- nvmf/common.sh@295 -- # local -ga e810 00:21:04.167 07:01:25 -- nvmf/common.sh@296 -- # x722=() 00:21:04.167 07:01:25 -- nvmf/common.sh@296 -- # local -ga x722 00:21:04.167 07:01:25 -- nvmf/common.sh@297 -- # mlx=() 00:21:04.167 07:01:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:04.167 07:01:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.167 07:01:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:04.167 07:01:25 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:21:04.167 07:01:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:04.167 07:01:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:04.167 07:01:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:04.167 07:01:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:04.167 07:01:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:04.167 07:01:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:04.168 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:04.168 07:01:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:04.168 07:01:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:04.168 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:04.168 07:01:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:04.168 07:01:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:04.168 07:01:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.168 07:01:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:04.168 07:01:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.168 07:01:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:04.168 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.168 07:01:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.168 07:01:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:04.168 07:01:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.168 07:01:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:04.168 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.168 07:01:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:04.168 07:01:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:04.168 07:01:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:04.168 07:01:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:04.168 07:01:25 -- nvmf/common.sh@57 -- # uname 00:21:04.168 07:01:25 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:21:04.168 07:01:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:04.168 07:01:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:04.168 07:01:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:04.168 07:01:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:04.168 07:01:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:04.168 07:01:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:04.168 07:01:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:04.168 07:01:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:04.168 07:01:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:04.168 07:01:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:04.168 07:01:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:04.168 07:01:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:04.168 07:01:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:04.168 07:01:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:04.168 07:01:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:04.168 07:01:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@104 -- # continue 2 00:21:04.168 07:01:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@104 -- # continue 2 00:21:04.168 07:01:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:04.168 07:01:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.168 07:01:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:04.168 07:01:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:04.168 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:04.168 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:04.168 altname enp217s0f0np0 00:21:04.168 altname ens818f0np0 00:21:04.168 inet 192.168.100.8/24 scope global mlx_0_0 00:21:04.168 valid_lft forever preferred_lft forever 00:21:04.168 07:01:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:04.168 07:01:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.168 07:01:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:04.168 07:01:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:04.168 07:01:25 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:04.168 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:04.168 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:04.168 altname enp217s0f1np1 00:21:04.168 altname ens818f1np1 00:21:04.168 inet 192.168.100.9/24 scope global mlx_0_1 00:21:04.168 valid_lft forever preferred_lft forever 00:21:04.168 07:01:25 -- nvmf/common.sh@410 -- # return 0 00:21:04.168 07:01:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:04.168 07:01:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:04.168 07:01:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:04.168 07:01:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:04.168 07:01:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:04.168 07:01:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:04.168 07:01:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:04.168 07:01:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:04.168 07:01:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:04.168 07:01:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@104 -- # continue 2 00:21:04.168 07:01:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:04.168 07:01:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:04.168 07:01:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@104 -- # continue 2 00:21:04.168 07:01:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:04.168 07:01:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.168 07:01:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:04.168 07:01:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:04.168 07:01:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:04.168 07:01:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:04.168 192.168.100.9' 00:21:04.168 07:01:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:04.168 192.168.100.9' 00:21:04.168 07:01:25 -- nvmf/common.sh@445 -- # head -n 1 00:21:04.168 07:01:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:04.168 07:01:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:04.168 192.168.100.9' 00:21:04.168 07:01:25 -- nvmf/common.sh@446 -- # head -n 1 00:21:04.168 07:01:25 -- nvmf/common.sh@446 -- # tail -n +2 00:21:04.168 07:01:25 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:04.168 07:01:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:04.168 07:01:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:04.168 07:01:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:04.168 07:01:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:04.168 07:01:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:04.168 07:01:25 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:04.168 07:01:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:04.168 07:01:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.168 07:01:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.168 07:01:25 -- nvmf/common.sh@469 -- # nvmfpid=1401011 00:21:04.168 07:01:25 -- nvmf/common.sh@470 -- # waitforlisten 1401011 00:21:04.168 07:01:25 -- common/autotest_common.sh@829 -- # '[' -z 1401011 ']' 00:21:04.169 07:01:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.169 07:01:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.169 07:01:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.169 07:01:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.169 07:01:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.169 07:01:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:04.169 [2024-12-15 07:01:25.624669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:04.169 [2024-12-15 07:01:25.624721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.169 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.169 [2024-12-15 07:01:25.696048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.169 [2024-12-15 07:01:25.735263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:04.169 [2024-12-15 07:01:25.735372] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.169 [2024-12-15 07:01:25.735382] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.169 [2024-12-15 07:01:25.735390] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
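nvmfappstart above launches the target on a 4-core mask (-m 0xF) and blocks in waitforlisten until the RPC socket answers; the trace shows max_retries=100. A hypothetical condensation of that handshake (the 0.5 s poll interval is an assumption; the real loop lives in autotest_common.sh):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do    # waitforlisten: poll the UNIX-domain RPC socket
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5                      # assumed interval, not from the trace
    done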
00:21:04.169 [2024-12-15 07:01:25.735486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.169 [2024-12-15 07:01:25.735582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.169 [2024-12-15 07:01:25.735644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.169 [2024-12-15 07:01:25.735645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.104 07:01:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.104 07:01:26 -- common/autotest_common.sh@862 -- # return 0 00:21:05.104 07:01:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:05.104 07:01:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.104 07:01:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.104 07:01:26 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:05.104 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.104 [2024-12-15 07:01:26.516381] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6b10d0/0x6b55a0) succeed. 00:21:05.104 [2024-12-15 07:01:26.525577] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6b2670/0x6f6c40) succeed. 00:21:05.104 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.104 07:01:26 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:05.104 07:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.104 07:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:05.104 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.104 Malloc1 00:21:05.104 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.104 07:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:05.104 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.104 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.104 07:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:05.104 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.104 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.104 07:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:05.104 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.104 [2024-12-15 07:01:26.706342] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:05.104 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.104 07:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.104 07:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:05.104 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 
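What follows is eleven iterations of the same four RPCs, one per subsystem (the Malloc1/cnode1 block above is the first), then an initiator-side connect for each. As multiconnection.sh drives it, with NVMF_SUBSYS=11, MALLOC_BDEV_SIZE=64 (MB) and MALLOC_BLOCK_SIZE=512 from the trace:

    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420
    done
    for i in $(seq 1 $NVMF_SUBSYS); do  # connect and verify each namespace shows up
        nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
            -t rdma -n nqn.2016-06.io.spdk:cnode$i -a $NVMF_FIRST_TARGET_IP -s 4420
        waitforserial SPDK$i   # sleeps, then greps lsblk -o NAME,SERIAL for the serial
    done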
00:21:05.104 Malloc2 00:21:05.104 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.104 07:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:05.104 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.104 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.105 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.105 07:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:05.105 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.105 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.364 07:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 Malloc3 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.364 07:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 Malloc4 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:05.364 07:01:26 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.364 07:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 Malloc5 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.364 07:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 Malloc6 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.364 07:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 Malloc7 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.364 07:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.364 07:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:05.364 07:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.364 07:01:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.624 07:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 Malloc8 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.624 07:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 Malloc9 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.624 07:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 Malloc10 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.624 07:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 Malloc11 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:05.624 07:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.624 07:01:27 -- common/autotest_common.sh@10 -- # set +x 00:21:05.624 07:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.624 07:01:27 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:05.624 07:01:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.624 07:01:27 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:06.582 07:01:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:06.582 07:01:28 -- common/autotest_common.sh@1187 -- # local i=0 00:21:06.582 07:01:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:06.582 07:01:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:06.582 07:01:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:09.115 07:01:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:09.115 07:01:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:09.115 07:01:30 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:21:09.115 07:01:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:09.115 07:01:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:09.115 07:01:30 -- common/autotest_common.sh@1197 -- # return 0 00:21:09.115 07:01:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:09.115 07:01:30 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:09.682 07:01:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:09.682 07:01:31 -- common/autotest_common.sh@1187 -- # local i=0 00:21:09.682 07:01:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:09.682 07:01:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:09.682 07:01:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:11.585 07:01:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:11.585 07:01:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:11.585 07:01:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:21:11.586 07:01:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:11.586 07:01:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:11.586 07:01:33 -- common/autotest_common.sh@1197 -- # return 0 00:21:11.586 07:01:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:11.586 07:01:33 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:12.961 07:01:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:12.961 07:01:34 -- common/autotest_common.sh@1187 -- # local i=0 00:21:12.961 07:01:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:12.961 07:01:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:12.961 07:01:34 -- 
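The trace above repeats the same four-step bring-up for each of the eleven targets: create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem with open host access and an SPDK$i serial, attach the bdev as a namespace, and add an RDMA listener on 192.168.100.8:4420. A condensed sketch of that sequence using SPDK's scripts/rpc.py (an illustration only; the test itself drives these RPCs through its rpc_cmd helper against the already-running target):

# Per-subsystem setup, as exercised by the multiconnection.sh@21-25 trace points above (sketch)
for i in $(seq 1 11); do
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i                            # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # -a: allow any host; -s: serial
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i       # expose the bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done
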
common/autotest_common.sh@1194 -- # sleep 2 00:21:14.864 07:01:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:14.864 07:01:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:14.864 07:01:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:21:14.864 07:01:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:14.864 07:01:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:14.864 07:01:36 -- common/autotest_common.sh@1197 -- # return 0 00:21:14.864 07:01:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:14.864 07:01:36 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:15.800 07:01:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:15.800 07:01:37 -- common/autotest_common.sh@1187 -- # local i=0 00:21:15.800 07:01:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:15.800 07:01:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:15.800 07:01:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:17.702 07:01:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:17.702 07:01:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:17.702 07:01:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:21:17.702 07:01:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:17.702 07:01:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.702 07:01:39 -- common/autotest_common.sh@1197 -- # return 0 00:21:17.702 07:01:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:17.702 07:01:39 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:18.639 07:01:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:18.639 07:01:40 -- common/autotest_common.sh@1187 -- # local i=0 00:21:18.639 07:01:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:18.639 07:01:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:18.639 07:01:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:21.173 07:01:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:21.173 07:01:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:21.173 07:01:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:21:21.173 07:01:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:21.173 07:01:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:21.173 07:01:42 -- common/autotest_common.sh@1197 -- # return 0 00:21:21.173 07:01:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.173 07:01:42 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:21.740 07:01:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:21.740 07:01:43 -- common/autotest_common.sh@1187 -- # local i=0 00:21:21.740 07:01:43 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:21.740 07:01:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:21.740 07:01:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:23.644 07:01:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:23.645 07:01:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:23.645 07:01:45 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:21:23.645 07:01:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:23.645 07:01:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:23.645 07:01:45 -- common/autotest_common.sh@1197 -- # return 0 00:21:23.645 07:01:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:23.645 07:01:45 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:25.021 07:01:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:25.021 07:01:46 -- common/autotest_common.sh@1187 -- # local i=0 00:21:25.021 07:01:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:25.021 07:01:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:25.021 07:01:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:26.923 07:01:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:26.923 07:01:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:26.923 07:01:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:21:26.923 07:01:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:26.923 07:01:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:26.923 07:01:48 -- common/autotest_common.sh@1197 -- # return 0 00:21:26.923 07:01:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:26.923 07:01:48 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:27.862 07:01:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:27.862 07:01:49 -- common/autotest_common.sh@1187 -- # local i=0 00:21:27.862 07:01:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.862 07:01:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:27.862 07:01:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:29.764 07:01:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:29.764 07:01:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:29.764 07:01:51 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:21:29.764 07:01:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:29.764 07:01:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.764 07:01:51 -- common/autotest_common.sh@1197 -- # return 0 00:21:29.764 07:01:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:29.764 07:01:51 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:30.700 
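Each connect in this loop is paired with a waitforserial poll: the helper retries lsblk for up to 15 iterations, two seconds apart, until exactly one block device reports the subsystem's serial. Pulled out of the trace into a standalone sketch (flags copied verbatim from the connect lines above; n=9 matches the connect just issued):

# Connect one subsystem and wait for its namespace to appear (sketch of the waitforserial pattern)
n=9
nvme connect -i 15 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
  --hostid=8013ee90-59d8-e711-906e-00163566263e \
  -t rdma -n nqn.2016-06.io.spdk:cnode$n -a 192.168.100.8 -s 4420
i=0
while (( i++ <= 15 )); do
  sleep 2
  # count block devices whose serial matches the subsystem's serial number
  (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK$n) == 1 )) && break
done
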
07:01:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:30.700 07:01:52 -- common/autotest_common.sh@1187 -- # local i=0 00:21:30.700 07:01:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:30.700 07:01:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:30.700 07:01:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:33.232 07:01:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:33.232 07:01:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:33.232 07:01:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:21:33.232 07:01:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:33.232 07:01:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:33.232 07:01:54 -- common/autotest_common.sh@1197 -- # return 0 00:21:33.232 07:01:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:33.232 07:01:54 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:33.798 07:01:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:33.798 07:01:55 -- common/autotest_common.sh@1187 -- # local i=0 00:21:33.798 07:01:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:33.798 07:01:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:33.798 07:01:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:35.701 07:01:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:35.701 07:01:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:35.701 07:01:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:21:35.701 07:01:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:35.701 07:01:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.701 07:01:57 -- common/autotest_common.sh@1197 -- # return 0 00:21:35.701 07:01:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:35.701 07:01:57 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:21:37.078 07:01:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:37.078 07:01:58 -- common/autotest_common.sh@1187 -- # local i=0 00:21:37.078 07:01:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.078 07:01:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:37.078 07:01:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:39.052 07:02:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:39.052 07:02:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:39.052 07:02:00 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:21:39.052 07:02:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:39.052 07:02:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:39.052 07:02:00 -- common/autotest_common.sh@1197 -- # return 0 00:21:39.052 07:02:00 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:39.052 [global] 00:21:39.052 
thread=1 00:21:39.052 invalidate=1 00:21:39.052 rw=read 00:21:39.052 time_based=1 00:21:39.052 runtime=10 00:21:39.052 ioengine=libaio 00:21:39.052 direct=1 00:21:39.052 bs=262144 00:21:39.052 iodepth=64 00:21:39.052 norandommap=1 00:21:39.052 numjobs=1 00:21:39.052 00:21:39.052 [job0] 00:21:39.052 filename=/dev/nvme0n1 00:21:39.052 [job1] 00:21:39.052 filename=/dev/nvme10n1 00:21:39.052 [job2] 00:21:39.052 filename=/dev/nvme1n1 00:21:39.052 [job3] 00:21:39.052 filename=/dev/nvme2n1 00:21:39.052 [job4] 00:21:39.052 filename=/dev/nvme3n1 00:21:39.052 [job5] 00:21:39.052 filename=/dev/nvme4n1 00:21:39.052 [job6] 00:21:39.052 filename=/dev/nvme5n1 00:21:39.052 [job7] 00:21:39.052 filename=/dev/nvme6n1 00:21:39.052 [job8] 00:21:39.052 filename=/dev/nvme7n1 00:21:39.052 [job9] 00:21:39.052 filename=/dev/nvme8n1 00:21:39.052 [job10] 00:21:39.052 filename=/dev/nvme9n1 00:21:39.052 Could not set queue depth (nvme0n1) 00:21:39.052 Could not set queue depth (nvme10n1) 00:21:39.052 Could not set queue depth (nvme1n1) 00:21:39.052 Could not set queue depth (nvme2n1) 00:21:39.052 Could not set queue depth (nvme3n1) 00:21:39.052 Could not set queue depth (nvme4n1) 00:21:39.052 Could not set queue depth (nvme5n1) 00:21:39.052 Could not set queue depth (nvme6n1) 00:21:39.052 Could not set queue depth (nvme7n1) 00:21:39.052 Could not set queue depth (nvme8n1) 00:21:39.052 Could not set queue depth (nvme9n1) 00:21:39.341 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.341 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.341 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.341 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.341 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.341 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.341 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.342 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.342 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.342 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.342 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:39.342 fio-3.35 00:21:39.342 Starting 11 threads 00:21:51.552 00:21:51.552 job0: (groupid=0, jobs=1): err= 0: pid=1407365: Sun Dec 15 07:02:11 2024 00:21:51.552 read: IOPS=1058, BW=265MiB/s (277MB/s)(2662MiB/10059msec) 00:21:51.552 slat (usec): min=12, max=41178, avg=925.99, stdev=2850.08 00:21:51.552 clat (usec): min=387, max=127422, avg=59469.50, stdev=29843.16 00:21:51.552 lat (usec): min=429, max=127449, avg=60395.49, stdev=30415.44 00:21:51.552 clat percentiles (msec): 00:21:51.552 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:21:51.552 | 30.00th=[ 19], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 79], 00:21:51.552 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 87], 00:21:51.552 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 123], 99.95th=[ 126], 
00:21:51.552 | 99.99th=[ 128] 00:21:51.552 bw ( KiB/s): min=190976, max=1039329, per=6.79%, avg=270872.05, stdev=198455.29, samples=20 00:21:51.552 iops : min= 746, max= 4059, avg=1058.05, stdev=775.04, samples=20 00:21:51.552 lat (usec) : 500=0.03%, 750=0.33%, 1000=0.17% 00:21:51.552 lat (msec) : 2=0.07%, 4=0.37%, 10=0.92%, 20=28.16%, 50=0.27% 00:21:51.552 lat (msec) : 100=69.36%, 250=0.34% 00:21:51.552 cpu : usr=0.45%, sys=4.41%, ctx=2234, majf=0, minf=3659 00:21:51.552 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:51.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.552 issued rwts: total=10648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.552 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.552 job1: (groupid=0, jobs=1): err= 0: pid=1407378: Sun Dec 15 07:02:11 2024 00:21:51.552 read: IOPS=828, BW=207MiB/s (217MB/s)(2082MiB/10052msec) 00:21:51.552 slat (usec): min=17, max=23270, avg=1196.63, stdev=3049.36 00:21:51.552 clat (msec): min=13, max=103, avg=75.97, stdev= 9.79 00:21:51.552 lat (msec): min=13, max=109, avg=77.16, stdev=10.27 00:21:51.552 clat percentiles (msec): 00:21:51.552 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 72], 00:21:51.552 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 80], 00:21:51.552 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 87], 00:21:51.552 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 101], 99.95th=[ 103], 00:21:51.552 | 99.99th=[ 104] 00:21:51.552 bw ( KiB/s): min=194560, max=280526, per=5.30%, avg=211555.90, stdev=24360.85, samples=20 00:21:51.552 iops : min= 760, max= 1095, avg=826.35, stdev=95.04, samples=20 00:21:51.552 lat (msec) : 20=0.25%, 50=2.47%, 100=97.14%, 250=0.13% 00:21:51.552 cpu : usr=0.35%, sys=4.06%, ctx=1614, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=8328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job2: (groupid=0, jobs=1): err= 0: pid=1407391: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=1825, BW=456MiB/s (478MB/s)(4589MiB/10057msec) 00:21:51.553 slat (usec): min=10, max=21336, avg=533.84, stdev=1424.24 00:21:51.553 clat (msec): min=7, max=123, avg=34.49, stdev=10.24 00:21:51.553 lat (msec): min=9, max=123, avg=35.02, stdev=10.43 00:21:51.553 clat percentiles (msec): 00:21:51.553 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:21:51.553 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:21:51.553 | 70.00th=[ 32], 80.00th=[ 41], 90.00th=[ 47], 95.00th=[ 61], 00:21:51.553 | 99.00th=[ 70], 99.50th=[ 72], 99.90th=[ 105], 99.95th=[ 114], 00:21:51.553 | 99.99th=[ 124] 00:21:51.553 bw ( KiB/s): min=263168, max=581120, per=11.73%, avg=468243.35, stdev=96955.37, samples=20 00:21:51.553 iops : min= 1028, max= 2270, avg=1829.05, stdev=378.77, samples=20 00:21:51.553 lat (msec) : 10=0.05%, 20=0.55%, 50=91.74%, 100=7.50%, 250=0.16% 00:21:51.553 cpu : usr=0.36%, sys=5.99%, ctx=3622, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=18355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job3: (groupid=0, jobs=1): err= 0: pid=1407398: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=828, BW=207MiB/s (217MB/s)(2083MiB/10052msec) 00:21:51.553 slat (usec): min=12, max=26809, avg=1196.96, stdev=2989.23 00:21:51.553 clat (msec): min=15, max=108, avg=75.94, stdev= 9.65 00:21:51.553 lat (msec): min=15, max=109, avg=77.14, stdev=10.13 00:21:51.553 clat percentiles (msec): 00:21:51.553 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 72], 00:21:51.553 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 80], 00:21:51.553 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 87], 00:21:51.553 | 99.00th=[ 94], 99.50th=[ 97], 99.90th=[ 105], 99.95th=[ 105], 00:21:51.553 | 99.99th=[ 109] 00:21:51.553 bw ( KiB/s): min=187392, max=282570, per=5.30%, avg=211606.90, stdev=24646.34, samples=20 00:21:51.553 iops : min= 732, max= 1103, avg=826.55, stdev=96.16, samples=20 00:21:51.553 lat (msec) : 20=0.25%, 50=2.41%, 100=97.09%, 250=0.24% 00:21:51.553 cpu : usr=0.39%, sys=3.61%, ctx=1570, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=8330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job4: (groupid=0, jobs=1): err= 0: pid=1407401: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=2123, BW=531MiB/s (557MB/s)(5322MiB/10024msec) 00:21:51.553 slat (usec): min=11, max=17184, avg=467.23, stdev=1190.36 00:21:51.553 clat (usec): min=2132, max=73286, avg=29633.10, stdev=8801.08 00:21:51.553 lat (usec): min=2350, max=73326, avg=30100.33, stdev=8977.85 00:21:51.553 clat percentiles (usec): 00:21:51.553 | 1.00th=[13698], 5.00th=[15008], 10.00th=[15795], 20.00th=[26608], 00:21:51.553 | 30.00th=[28443], 40.00th=[29492], 50.00th=[30016], 60.00th=[30540], 00:21:51.553 | 70.00th=[31065], 80.00th=[32113], 90.00th=[44827], 95.00th=[46400], 00:21:51.553 | 99.00th=[54264], 99.50th=[58459], 99.90th=[64226], 99.95th=[66323], 00:21:51.553 | 99.99th=[72877] 00:21:51.553 bw ( KiB/s): min=314762, max=1025026, per=13.61%, avg=543431.00, stdev=165739.56, samples=20 00:21:51.553 iops : min= 1229, max= 4004, avg=2122.75, stdev=647.46, samples=20 00:21:51.553 lat (msec) : 4=0.07%, 10=0.23%, 20=16.44%, 50=81.33%, 100=1.93% 00:21:51.553 cpu : usr=0.48%, sys=6.86%, ctx=4042, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=21288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job5: (groupid=0, jobs=1): err= 0: pid=1407422: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=825, BW=206MiB/s (216MB/s)(2074MiB/10049msec) 00:21:51.553 slat (usec): min=12, max=33054, avg=1202.63, stdev=3435.51 00:21:51.553 clat (msec): min=17, max=112, avg=76.21, stdev= 9.56 00:21:51.553 lat (msec): min=17, max=117, avg=77.42, stdev=10.14 00:21:51.553 
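These per-job read numbers (job5 here, the remaining jobs below) all come from the job file fio-wrapper echoed before "Starting 11 threads": a 10-second, time-based, 256 KiB (bs=262144) sequential read at queue depth 64, one job per namespace. A hand-runnable single-device equivalent, assuming the .fio filename and /dev/nvme0n1 as placeholders:

# Minimal single-job version of the echoed fio config (sketch; the wrapper generates all 11 jobs)
cat > multiconn-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
fio multiconn-read.fio
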
clat percentiles (msec): 00:21:51.553 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 72], 00:21:51.553 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 80], 00:21:51.553 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 87], 00:21:51.553 | 99.00th=[ 97], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 110], 00:21:51.553 | 99.99th=[ 113] 00:21:51.553 bw ( KiB/s): min=180224, max=284614, per=5.28%, avg=210761.90, stdev=24625.87, samples=20 00:21:51.553 iops : min= 704, max= 1111, avg=823.25, stdev=96.07, samples=20 00:21:51.553 lat (msec) : 20=0.11%, 50=2.40%, 100=96.78%, 250=0.71% 00:21:51.553 cpu : usr=0.21%, sys=2.86%, ctx=1651, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=8297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job6: (groupid=0, jobs=1): err= 0: pid=1407433: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=1643, BW=411MiB/s (431MB/s)(4132MiB/10058msec) 00:21:51.553 slat (usec): min=11, max=33180, avg=597.64, stdev=1730.13 00:21:51.553 clat (msec): min=9, max=103, avg=38.31, stdev=13.71 00:21:51.553 lat (msec): min=9, max=103, avg=38.91, stdev=13.98 00:21:51.553 clat percentiles (msec): 00:21:51.553 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 30], 00:21:51.553 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 44], 60.00th=[ 45], 00:21:51.553 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 56], 95.00th=[ 58], 00:21:51.553 | 99.00th=[ 70], 99.50th=[ 73], 99.90th=[ 100], 99.95th=[ 102], 00:21:51.553 | 99.99th=[ 104] 00:21:51.553 bw ( KiB/s): min=261120, max=872448, per=10.56%, avg=421393.20, stdev=163585.72, samples=20 00:21:51.553 iops : min= 1020, max= 3408, avg=1646.05, stdev=639.02, samples=20 00:21:51.553 lat (msec) : 10=0.07%, 20=15.65%, 50=70.48%, 100=13.72%, 250=0.07% 00:21:51.553 cpu : usr=0.56%, sys=5.92%, ctx=3147, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=16526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job7: (groupid=0, jobs=1): err= 0: pid=1407440: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=831, BW=208MiB/s (218MB/s)(2091MiB/10051msec) 00:21:51.553 slat (usec): min=11, max=26788, avg=1152.62, stdev=3012.76 00:21:51.553 clat (msec): min=15, max=112, avg=75.68, stdev=10.45 00:21:51.553 lat (msec): min=15, max=112, avg=76.83, stdev=11.01 00:21:51.553 clat percentiles (msec): 00:21:51.553 | 1.00th=[ 38], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 72], 00:21:51.553 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 80], 00:21:51.553 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 87], 00:21:51.553 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 106], 99.95th=[ 107], 00:21:51.553 | 99.99th=[ 113] 00:21:51.553 bw ( KiB/s): min=183296, max=275928, per=5.32%, avg=212426.80, stdev=26091.76, samples=20 00:21:51.553 iops : min= 716, max= 1077, avg=829.75, stdev=101.81, samples=20 00:21:51.553 lat (msec) : 20=0.22%, 50=2.71%, 100=96.68%, 250=0.39% 00:21:51.553 cpu : usr=0.42%, 
sys=3.94%, ctx=1847, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=8362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job8: (groupid=0, jobs=1): err= 0: pid=1407464: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=1530, BW=383MiB/s (401MB/s)(3849MiB/10057msec) 00:21:51.553 slat (usec): min=11, max=22745, avg=647.08, stdev=1931.02 00:21:51.553 clat (msec): min=8, max=113, avg=41.12, stdev=11.86 00:21:51.553 lat (msec): min=9, max=113, avg=41.76, stdev=12.14 00:21:51.553 clat percentiles (msec): 00:21:51.553 | 1.00th=[ 29], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:21:51.553 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 44], 60.00th=[ 46], 00:21:51.553 | 70.00th=[ 47], 80.00th=[ 50], 90.00th=[ 58], 95.00th=[ 62], 00:21:51.553 | 99.00th=[ 71], 99.50th=[ 74], 99.90th=[ 106], 99.95th=[ 111], 00:21:51.553 | 99.99th=[ 114] 00:21:51.553 bw ( KiB/s): min=263168, max=528384, per=9.83%, avg=392474.90, stdev=100458.27, samples=20 00:21:51.553 iops : min= 1028, max= 2064, avg=1533.05, stdev=392.44, samples=20 00:21:51.553 lat (msec) : 10=0.10%, 20=0.44%, 50=80.42%, 100=18.90%, 250=0.14% 00:21:51.553 cpu : usr=0.50%, sys=5.43%, ctx=2851, majf=0, minf=4097 00:21:51.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.553 issued rwts: total=15394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.553 job9: (groupid=0, jobs=1): err= 0: pid=1407475: Sun Dec 15 07:02:11 2024 00:21:51.553 read: IOPS=2532, BW=633MiB/s (664MB/s)(6369MiB/10058msec) 00:21:51.553 slat (usec): min=11, max=43028, avg=383.74, stdev=1620.11 00:21:51.553 clat (msec): min=12, max=127, avg=24.85, stdev=15.35 00:21:51.554 lat (msec): min=12, max=127, avg=25.24, stdev=15.63 00:21:51.554 clat percentiles (msec): 00:21:51.554 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 16], 00:21:51.554 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:21:51.554 | 70.00th=[ 31], 80.00th=[ 33], 90.00th=[ 55], 95.00th=[ 59], 00:21:51.554 | 99.00th=[ 70], 99.50th=[ 74], 99.90th=[ 115], 99.95th=[ 121], 00:21:51.554 | 99.99th=[ 124] 00:21:51.554 bw ( KiB/s): min=264208, max=1040896, per=16.30%, avg=650566.10, stdev=332497.42, samples=20 00:21:51.554 iops : min= 1032, max= 4066, avg=2541.25, stdev=1298.84, samples=20 00:21:51.554 lat (msec) : 20=65.62%, 50=23.41%, 100=10.78%, 250=0.18% 00:21:51.554 cpu : usr=0.50%, sys=6.74%, ctx=4691, majf=0, minf=4097 00:21:51.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.554 issued rwts: total=25476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.554 job10: (groupid=0, jobs=1): err= 0: pid=1407484: Sun Dec 15 07:02:11 2024 00:21:51.554 read: IOPS=1580, BW=395MiB/s (414MB/s)(3960MiB/10025msec) 00:21:51.554 slat (usec): min=11, 
max=20990, avg=602.44, stdev=1763.42 00:21:51.554 clat (usec): min=9501, max=76540, avg=39857.45, stdev=10921.96 00:21:51.554 lat (usec): min=9743, max=76598, avg=40459.89, stdev=11157.18 00:21:51.554 clat percentiles (usec): 00:21:51.554 | 1.00th=[25822], 5.00th=[28443], 10.00th=[29230], 20.00th=[30278], 00:21:51.554 | 30.00th=[31065], 40.00th=[31589], 50.00th=[34341], 60.00th=[44827], 00:21:51.554 | 70.00th=[45876], 80.00th=[47449], 90.00th=[56886], 95.00th=[60556], 00:21:51.554 | 99.00th=[63701], 99.50th=[67634], 99.90th=[72877], 99.95th=[73925], 00:21:51.554 | 99.99th=[74974] 00:21:51.554 bw ( KiB/s): min=273920, max=525824, per=10.12%, avg=403884.80, stdev=96296.74, samples=20 00:21:51.554 iops : min= 1070, max= 2054, avg=1577.65, stdev=376.18, samples=20 00:21:51.554 lat (msec) : 10=0.04%, 20=0.49%, 50=82.80%, 100=16.66% 00:21:51.554 cpu : usr=0.41%, sys=5.67%, ctx=3246, majf=0, minf=4097 00:21:51.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:51.554 issued rwts: total=15841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:51.554 00:21:51.554 Run status group 0 (all jobs): 00:21:51.554 READ: bw=3898MiB/s (4087MB/s), 206MiB/s-633MiB/s (216MB/s-664MB/s), io=38.3GiB (41.1GB), run=10024-10059msec 00:21:51.554 00:21:51.554 Disk stats (read/write): 00:21:51.554 nvme0n1: ios=20907/0, merge=0/0, ticks=1217795/0, in_queue=1217795, util=96.50% 00:21:51.554 nvme10n1: ios=16210/0, merge=0/0, ticks=1218698/0, in_queue=1218698, util=96.82% 00:21:51.554 nvme1n1: ios=36332/0, merge=0/0, ticks=1211271/0, in_queue=1211271, util=97.25% 00:21:51.554 nvme2n1: ios=16212/0, merge=0/0, ticks=1218816/0, in_queue=1218816, util=97.52% 00:21:51.554 nvme3n1: ios=41797/0, merge=0/0, ticks=1213127/0, in_queue=1213127, util=97.60% 00:21:51.554 nvme4n1: ios=16166/0, merge=0/0, ticks=1216728/0, in_queue=1216728, util=98.07% 00:21:51.554 nvme5n1: ios=32679/0, merge=0/0, ticks=1214267/0, in_queue=1214267, util=98.28% 00:21:51.554 nvme6n1: ios=16294/0, merge=0/0, ticks=1219648/0, in_queue=1219648, util=98.40% 00:21:51.554 nvme7n1: ios=30411/0, merge=0/0, ticks=1214643/0, in_queue=1214643, util=98.89% 00:21:51.554 nvme8n1: ios=50570/0, merge=0/0, ticks=1211746/0, in_queue=1211746, util=99.13% 00:21:51.554 nvme9n1: ios=30919/0, merge=0/0, ticks=1218661/0, in_queue=1218661, util=99.27% 00:21:51.554 07:02:11 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:51.554 [global] 00:21:51.554 thread=1 00:21:51.554 invalidate=1 00:21:51.554 rw=randwrite 00:21:51.554 time_based=1 00:21:51.554 runtime=10 00:21:51.554 ioengine=libaio 00:21:51.554 direct=1 00:21:51.554 bs=262144 00:21:51.554 iodepth=64 00:21:51.554 norandommap=1 00:21:51.554 numjobs=1 00:21:51.554 00:21:51.554 [job0] 00:21:51.554 filename=/dev/nvme0n1 00:21:51.554 [job1] 00:21:51.554 filename=/dev/nvme10n1 00:21:51.554 [job2] 00:21:51.554 filename=/dev/nvme1n1 00:21:51.554 [job3] 00:21:51.554 filename=/dev/nvme2n1 00:21:51.554 [job4] 00:21:51.554 filename=/dev/nvme3n1 00:21:51.554 [job5] 00:21:51.554 filename=/dev/nvme4n1 00:21:51.554 [job6] 00:21:51.554 filename=/dev/nvme5n1 00:21:51.554 [job7] 00:21:51.554 filename=/dev/nvme6n1 00:21:51.554 [job8] 00:21:51.554 filename=/dev/nvme7n1 00:21:51.554 
[job9] 00:21:51.554 filename=/dev/nvme8n1 00:21:51.554 [job10] 00:21:51.554 filename=/dev/nvme9n1 00:21:51.554 Could not set queue depth (nvme0n1) 00:21:51.554 Could not set queue depth (nvme10n1) 00:21:51.554 Could not set queue depth (nvme1n1) 00:21:51.554 Could not set queue depth (nvme2n1) 00:21:51.554 Could not set queue depth (nvme3n1) 00:21:51.554 Could not set queue depth (nvme4n1) 00:21:51.554 Could not set queue depth (nvme5n1) 00:21:51.554 Could not set queue depth (nvme6n1) 00:21:51.554 Could not set queue depth (nvme7n1) 00:21:51.554 Could not set queue depth (nvme8n1) 00:21:51.554 Could not set queue depth (nvme9n1) 00:21:51.554 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:51.554 fio-3.35 00:21:51.554 Starting 11 threads 00:22:01.536 00:22:01.536 job0: (groupid=0, jobs=1): err= 0: pid=1409306: Sun Dec 15 07:02:22 2024 00:22:01.536 write: IOPS=985, BW=246MiB/s (258MB/s)(2478MiB/10057msec); 0 zone resets 00:22:01.536 slat (usec): min=20, max=52899, avg=967.88, stdev=2623.54 00:22:01.536 clat (msec): min=2, max=165, avg=63.95, stdev=22.16 00:22:01.536 lat (msec): min=2, max=165, avg=64.92, stdev=22.58 00:22:01.536 clat percentiles (msec): 00:22:01.536 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:22:01.536 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 70], 60.00th=[ 72], 00:22:01.536 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 91], 95.00th=[ 100], 00:22:01.536 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 155], 99.95th=[ 161], 00:22:01.536 | 99.99th=[ 165] 00:22:01.536 bw ( KiB/s): min=150016, max=457728, per=7.21%, avg=252134.40, stdev=88101.82, samples=20 00:22:01.536 iops : min= 586, max= 1788, avg=984.90, stdev=344.15, samples=20 00:22:01.536 lat (msec) : 4=0.13%, 10=0.09%, 20=0.42%, 50=25.15%, 100=69.58% 00:22:01.536 lat (msec) : 250=4.62% 00:22:01.536 cpu : usr=2.37%, sys=3.44%, ctx=2422, majf=0, minf=1 00:22:01.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:01.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.536 issued rwts: total=0,9912,0,0 short=0,0,0,0 dropped=0,0,0,0 
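The randwrite pass prints the same per-job result blocks (job0 here reports IOPS=985, BW=246MiB/s), and fio aggregates them itself in the "Run status group" line at the end of the run. Purely as an illustration, the per-job figures can be re-summed from a saved copy of this output (fio-randwrite.log is a placeholder path):

# Re-sum the per-job 'BW=NNNMiB/s' figures from a captured log
# (illustrative only; the pattern matches the integer MiB/s values printed in this run)
grep -Eo 'BW=[0-9]+MiB/s' fio-randwrite.log \
  | tr -dc '0-9\n' \
  | awk '{ sum += $1 } END { printf "aggregate: %d MiB/s\n", sum }'
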
00:22:01.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.536 job1: (groupid=0, jobs=1): err= 0: pid=1409336: Sun Dec 15 07:02:22 2024 00:22:01.536 write: IOPS=1351, BW=338MiB/s (354MB/s)(3402MiB/10067msec); 0 zone resets 00:22:01.536 slat (usec): min=16, max=28243, avg=695.03, stdev=1684.57 00:22:01.536 clat (usec): min=1098, max=158919, avg=46638.85, stdev=27435.27 00:22:01.536 lat (usec): min=1160, max=158977, avg=47333.88, stdev=27870.58 00:22:01.536 clat percentiles (msec): 00:22:01.536 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:22:01.536 | 30.00th=[ 19], 40.00th=[ 35], 50.00th=[ 53], 60.00th=[ 56], 00:22:01.536 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 78], 95.00th=[ 95], 00:22:01.536 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 142], 99.95th=[ 146], 00:22:01.536 | 99.99th=[ 153] 00:22:01.536 bw ( KiB/s): min=150016, max=913920, per=9.92%, avg=346700.80, stdev=217652.92, samples=20 00:22:01.536 iops : min= 586, max= 3570, avg=1354.30, stdev=850.21, samples=20 00:22:01.536 lat (msec) : 2=0.17%, 4=0.62%, 10=2.16%, 20=32.40%, 50=11.49% 00:22:01.536 lat (msec) : 100=50.14%, 250=3.01% 00:22:01.536 cpu : usr=2.72%, sys=4.57%, ctx=3503, majf=0, minf=1 00:22:01.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:01.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.536 issued rwts: total=0,13606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.536 job2: (groupid=0, jobs=1): err= 0: pid=1409353: Sun Dec 15 07:02:22 2024 00:22:01.536 write: IOPS=899, BW=225MiB/s (236MB/s)(2263MiB/10067msec); 0 zone resets 00:22:01.536 slat (usec): min=23, max=21232, avg=1064.86, stdev=2141.42 00:22:01.536 clat (msec): min=6, max=152, avg=70.09, stdev=17.14 00:22:01.536 lat (msec): min=6, max=159, avg=71.15, stdev=17.45 00:22:01.536 clat percentiles (msec): 00:22:01.536 | 1.00th=[ 28], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:22:01.536 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 74], 00:22:01.536 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 94], 95.00th=[ 101], 00:22:01.536 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 140], 99.95th=[ 148], 00:22:01.536 | 99.99th=[ 153] 00:22:01.536 bw ( KiB/s): min=146944, max=306176, per=6.58%, avg=230118.40, stdev=49510.13, samples=20 00:22:01.536 iops : min= 574, max= 1196, avg=898.90, stdev=193.40, samples=20 00:22:01.536 lat (msec) : 10=0.13%, 20=0.72%, 50=2.21%, 100=91.95%, 250=4.99% 00:22:01.536 cpu : usr=2.08%, sys=3.37%, ctx=2351, majf=0, minf=1 00:22:01.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:01.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.536 issued rwts: total=0,9052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.536 job3: (groupid=0, jobs=1): err= 0: pid=1409364: Sun Dec 15 07:02:22 2024 00:22:01.536 write: IOPS=1199, BW=300MiB/s (314MB/s)(3019MiB/10067msec); 0 zone resets 00:22:01.536 slat (usec): min=19, max=35529, avg=821.22, stdev=1800.97 00:22:01.536 clat (msec): min=13, max=153, avg=52.52, stdev=25.64 00:22:01.536 lat (msec): min=13, max=153, avg=53.34, stdev=26.04 00:22:01.536 clat percentiles (msec): 00:22:01.536 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 
19], 20.00th=[ 34], 00:22:01.536 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 54], 60.00th=[ 57], 00:22:01.536 | 70.00th=[ 63], 80.00th=[ 75], 90.00th=[ 91], 95.00th=[ 97], 00:22:01.536 | 99.00th=[ 110], 99.50th=[ 115], 99.90th=[ 142], 99.95th=[ 146], 00:22:01.536 | 99.99th=[ 155] 00:22:01.536 bw ( KiB/s): min=145408, max=751616, per=8.80%, avg=307507.20, stdev=166463.74, samples=20 00:22:01.536 iops : min= 568, max= 2936, avg=1201.20, stdev=650.25, samples=20 00:22:01.536 lat (msec) : 20=15.30%, 50=30.80%, 100=49.91%, 250=3.99% 00:22:01.536 cpu : usr=2.44%, sys=4.59%, ctx=2850, majf=0, minf=1 00:22:01.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:01.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.536 issued rwts: total=0,12075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 job4: (groupid=0, jobs=1): err= 0: pid=1409369: Sun Dec 15 07:02:22 2024 00:22:01.537 write: IOPS=1352, BW=338MiB/s (354MB/s)(3399MiB/10054msec); 0 zone resets 00:22:01.537 slat (usec): min=18, max=63634, avg=703.88, stdev=1707.60 00:22:01.537 clat (msec): min=4, max=132, avg=46.61, stdev=22.60 00:22:01.537 lat (msec): min=4, max=132, avg=47.31, stdev=22.95 00:22:01.537 clat percentiles (msec): 00:22:01.537 | 1.00th=[ 16], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 20], 00:22:01.537 | 30.00th=[ 28], 40.00th=[ 38], 50.00th=[ 53], 60.00th=[ 56], 00:22:01.537 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 79], 00:22:01.537 | 99.00th=[ 92], 99.50th=[ 100], 99.90th=[ 120], 99.95th=[ 126], 00:22:01.537 | 99.99th=[ 133] 00:22:01.537 bw ( KiB/s): min=192512, max=863744, per=9.91%, avg=346419.20, stdev=196150.09, samples=20 00:22:01.537 iops : min= 752, max= 3374, avg=1353.20, stdev=766.21, samples=20 00:22:01.537 lat (msec) : 10=0.13%, 20=26.49%, 50=20.29%, 100=52.65%, 250=0.43% 00:22:01.537 cpu : usr=2.58%, sys=4.15%, ctx=3233, majf=0, minf=1 00:22:01.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:01.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.537 issued rwts: total=0,13595,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 job5: (groupid=0, jobs=1): err= 0: pid=1409390: Sun Dec 15 07:02:22 2024 00:22:01.537 write: IOPS=1037, BW=259MiB/s (272MB/s)(2607MiB/10054msec); 0 zone resets 00:22:01.537 slat (usec): min=24, max=24186, avg=949.81, stdev=2043.55 00:22:01.537 clat (msec): min=10, max=120, avg=60.74, stdev=13.09 00:22:01.537 lat (msec): min=10, max=120, avg=61.69, stdev=13.34 00:22:01.537 clat percentiles (msec): 00:22:01.537 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 52], 00:22:01.537 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 65], 00:22:01.537 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 85], 00:22:01.537 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 113], 99.95th=[ 121], 00:22:01.537 | 99.99th=[ 121] 00:22:01.537 bw ( KiB/s): min=186880, max=398848, per=7.59%, avg=265292.80, stdev=53531.82, samples=20 00:22:01.537 iops : min= 730, max= 1558, avg=1036.30, stdev=209.11, samples=20 00:22:01.537 lat (msec) : 20=0.12%, 50=13.83%, 100=85.62%, 250=0.42% 00:22:01.537 cpu : usr=2.57%, sys=4.17%, ctx=2496, majf=0, minf=1 00:22:01.537 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:01.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.537 issued rwts: total=0,10426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 job6: (groupid=0, jobs=1): err= 0: pid=1409401: Sun Dec 15 07:02:22 2024 00:22:01.537 write: IOPS=1400, BW=350MiB/s (367MB/s)(3524MiB/10063msec); 0 zone resets 00:22:01.537 slat (usec): min=20, max=14940, avg=697.90, stdev=1449.23 00:22:01.537 clat (msec): min=15, max=160, avg=44.98, stdev=18.59 00:22:01.537 lat (msec): min=15, max=160, avg=45.68, stdev=18.87 00:22:01.537 clat percentiles (msec): 00:22:01.537 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 34], 00:22:01.537 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 51], 00:22:01.537 | 70.00th=[ 54], 80.00th=[ 57], 90.00th=[ 73], 95.00th=[ 78], 00:22:01.537 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 140], 99.95th=[ 153], 00:22:01.537 | 99.99th=[ 161] 00:22:01.537 bw ( KiB/s): min=167936, max=888832, per=10.28%, avg=359193.60, stdev=155657.62, samples=20 00:22:01.537 iops : min= 656, max= 3472, avg=1403.10, stdev=608.04, samples=20 00:22:01.537 lat (msec) : 20=12.88%, 50=46.21%, 100=40.54%, 250=0.37% 00:22:01.537 cpu : usr=2.81%, sys=4.44%, ctx=3333, majf=0, minf=1 00:22:01.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:01.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.537 issued rwts: total=0,14094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 job7: (groupid=0, jobs=1): err= 0: pid=1409407: Sun Dec 15 07:02:22 2024 00:22:01.537 write: IOPS=1848, BW=462MiB/s (485MB/s)(4629MiB/10015msec); 0 zone resets 00:22:01.537 slat (usec): min=14, max=60002, avg=517.03, stdev=1242.54 00:22:01.537 clat (msec): min=6, max=121, avg=34.09, stdev=19.24 00:22:01.537 lat (msec): min=7, max=129, avg=34.61, stdev=19.52 00:22:01.537 clat percentiles (msec): 00:22:01.537 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:22:01.537 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 38], 00:22:01.537 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 58], 95.00th=[ 68], 00:22:01.537 | 99.00th=[ 78], 99.50th=[ 82], 99.90th=[ 106], 99.95th=[ 114], 00:22:01.537 | 99.99th=[ 122] 00:22:01.537 bw ( KiB/s): min=244224, max=926720, per=13.51%, avg=472345.60, stdev=253282.51, samples=20 00:22:01.537 iops : min= 954, max= 3620, avg=1845.10, stdev=989.38, samples=20 00:22:01.537 lat (msec) : 10=0.05%, 20=51.83%, 50=14.30%, 100=33.69%, 250=0.14% 00:22:01.537 cpu : usr=3.27%, sys=5.10%, ctx=4211, majf=0, minf=1 00:22:01.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:01.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.537 issued rwts: total=0,18514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 job8: (groupid=0, jobs=1): err= 0: pid=1409416: Sun Dec 15 07:02:22 2024 00:22:01.537 write: IOPS=1255, BW=314MiB/s (329MB/s)(3154MiB/10053msec); 0 zone resets 00:22:01.537 slat (usec): min=20, max=31791, 
avg=753.28, stdev=1855.37 00:22:01.537 clat (usec): min=1733, max=127492, avg=50223.06, stdev=18425.07 00:22:01.537 lat (usec): min=1762, max=127548, avg=50976.34, stdev=18747.31 00:22:01.537 clat percentiles (msec): 00:22:01.537 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 35], 00:22:01.537 | 30.00th=[ 36], 40.00th=[ 38], 50.00th=[ 49], 60.00th=[ 53], 00:22:01.537 | 70.00th=[ 59], 80.00th=[ 71], 90.00th=[ 74], 95.00th=[ 81], 00:22:01.537 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 116], 99.95th=[ 117], 00:22:01.537 | 99.99th=[ 128] 00:22:01.537 bw ( KiB/s): min=190464, max=535552, per=9.19%, avg=321382.40, stdev=108936.08, samples=20 00:22:01.537 iops : min= 744, max= 2092, avg=1255.40, stdev=425.53, samples=20 00:22:01.537 lat (msec) : 2=0.02%, 4=0.37%, 10=0.86%, 20=0.64%, 50=49.96% 00:22:01.537 lat (msec) : 100=47.90%, 250=0.25% 00:22:01.537 cpu : usr=2.62%, sys=4.22%, ctx=3101, majf=0, minf=1 00:22:01.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:01.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.537 issued rwts: total=0,12617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 job9: (groupid=0, jobs=1): err= 0: pid=1409417: Sun Dec 15 07:02:22 2024 00:22:01.537 write: IOPS=1192, BW=298MiB/s (313MB/s)(2984MiB/10011msec); 0 zone resets 00:22:01.537 slat (usec): min=18, max=43417, avg=821.98, stdev=2086.90 00:22:01.537 clat (msec): min=5, max=133, avg=52.84, stdev=25.42 00:22:01.537 lat (msec): min=6, max=136, avg=53.66, stdev=25.84 00:22:01.537 clat percentiles (msec): 00:22:01.537 | 1.00th=[ 16], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 35], 00:22:01.537 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 54], 60.00th=[ 57], 00:22:01.537 | 70.00th=[ 67], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 96], 00:22:01.537 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 125], 99.95th=[ 129], 00:22:01.537 | 99.99th=[ 131] 00:22:01.537 bw ( KiB/s): min=149504, max=505344, per=7.83%, avg=273839.16, stdev=96906.55, samples=19 00:22:01.537 iops : min= 584, max= 1974, avg=1069.68, stdev=378.54, samples=19 00:22:01.537 lat (msec) : 10=0.30%, 20=16.72%, 50=27.58%, 100=51.94%, 250=3.45% 00:22:01.537 cpu : usr=2.24%, sys=3.26%, ctx=2866, majf=0, minf=1 00:22:01.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:01.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.537 issued rwts: total=0,11935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 job10: (groupid=0, jobs=1): err= 0: pid=1409418: Sun Dec 15 07:02:22 2024 00:22:01.537 write: IOPS=1154, BW=289MiB/s (303MB/s)(2907MiB/10067msec); 0 zone resets 00:22:01.537 slat (usec): min=17, max=48873, avg=825.95, stdev=2069.36 00:22:01.537 clat (usec): min=666, max=158437, avg=54570.19, stdev=25911.46 00:22:01.537 lat (usec): min=984, max=158471, avg=55396.14, stdev=26320.80 00:22:01.537 clat percentiles (msec): 00:22:01.537 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 35], 00:22:01.537 | 30.00th=[ 37], 40.00th=[ 51], 50.00th=[ 54], 60.00th=[ 56], 00:22:01.537 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 91], 95.00th=[ 99], 00:22:01.537 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 153], 99.95th=[ 155], 00:22:01.537 | 
99.99th=[ 159] 00:22:01.537 bw ( KiB/s): min=141824, max=876544, per=8.47%, avg=296012.80, stdev=163216.01, samples=20 00:22:01.537 iops : min= 554, max= 3424, avg=1156.30, stdev=637.56, samples=20 00:22:01.537 lat (usec) : 750=0.01%, 1000=0.01% 00:22:01.537 lat (msec) : 2=0.22%, 4=0.34%, 10=0.60%, 20=13.40%, 50=25.20% 00:22:01.537 lat (msec) : 100=56.11%, 250=4.11% 00:22:01.537 cpu : usr=2.31%, sys=3.54%, ctx=2879, majf=0, minf=1 00:22:01.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:01.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:01.537 issued rwts: total=0,11626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:01.537 00:22:01.537 Run status group 0 (all jobs): 00:22:01.537 WRITE: bw=3413MiB/s (3579MB/s), 225MiB/s-462MiB/s (236MB/s-485MB/s), io=33.6GiB (36.0GB), run=10011-10067msec 00:22:01.537 00:22:01.537 Disk stats (read/write): 00:22:01.537 nvme0n1: ios=49/19470, merge=0/0, ticks=23/1217624, in_queue=1217647, util=96.71% 00:22:01.537 nvme10n1: ios=0/26905, merge=0/0, ticks=0/1214914, in_queue=1214914, util=96.83% 00:22:01.537 nvme1n1: ios=0/17796, merge=0/0, ticks=0/1213022, in_queue=1213022, util=97.16% 00:22:01.537 nvme2n1: ios=0/23842, merge=0/0, ticks=0/1213514, in_queue=1213514, util=97.35% 00:22:01.538 nvme3n1: ios=0/26839, merge=0/0, ticks=0/1218823, in_queue=1218823, util=97.44% 00:22:01.538 nvme4n1: ios=0/20493, merge=0/0, ticks=0/1214816, in_queue=1214816, util=97.85% 00:22:01.538 nvme5n1: ios=0/27893, merge=0/0, ticks=0/1216356, in_queue=1216356, util=98.03% 00:22:01.538 nvme6n1: ios=0/35991, merge=0/0, ticks=0/1222580, in_queue=1222580, util=98.16% 00:22:01.538 nvme7n1: ios=0/24883, merge=0/0, ticks=0/1217949, in_queue=1217949, util=98.65% 00:22:01.538 nvme8n1: ios=0/22834, merge=0/0, ticks=0/1219620, in_queue=1219620, util=98.88% 00:22:01.538 nvme9n1: ios=0/22951, merge=0/0, ticks=0/1213927, in_queue=1213927, util=99.03% 00:22:01.538 07:02:22 -- target/multiconnection.sh@36 -- # sync 00:22:01.538 07:02:22 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:01.538 07:02:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.538 07:02:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:02.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:02.105 07:02:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:02.105 07:02:23 -- common/autotest_common.sh@1208 -- # local i=0 00:22:02.105 07:02:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:02.105 07:02:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:22:02.105 07:02:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:02.105 07:02:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:02.105 07:02:23 -- common/autotest_common.sh@1220 -- # return 0 00:22:02.105 07:02:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:02.105 07:02:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.105 07:02:23 -- common/autotest_common.sh@10 -- # set +x 00:22:02.105 07:02:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.105 07:02:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.105 07:02:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode2 00:22:03.042 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:03.042 07:02:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:03.042 07:02:24 -- common/autotest_common.sh@1208 -- # local i=0 00:22:03.042 07:02:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:03.042 07:02:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:22:03.042 07:02:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:03.042 07:02:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:03.042 07:02:24 -- common/autotest_common.sh@1220 -- # return 0 00:22:03.042 07:02:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:03.042 07:02:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.042 07:02:24 -- common/autotest_common.sh@10 -- # set +x 00:22:03.042 07:02:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.042 07:02:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.042 07:02:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:03.979 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:03.979 07:02:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:03.979 07:02:25 -- common/autotest_common.sh@1208 -- # local i=0 00:22:03.979 07:02:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:03.979 07:02:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:22:03.979 07:02:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:03.979 07:02:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:03.979 07:02:25 -- common/autotest_common.sh@1220 -- # return 0 00:22:03.979 07:02:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:03.979 07:02:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.979 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:22:03.979 07:02:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.979 07:02:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.979 07:02:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:04.915 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:04.915 07:02:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:04.915 07:02:26 -- common/autotest_common.sh@1208 -- # local i=0 00:22:04.915 07:02:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:04.915 07:02:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:22:04.915 07:02:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:04.915 07:02:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:04.915 07:02:26 -- common/autotest_common.sh@1220 -- # return 0 00:22:04.915 07:02:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:04.915 07:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.915 07:02:26 -- common/autotest_common.sh@10 -- # set +x 00:22:04.915 07:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.915 07:02:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.915 07:02:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:05.851 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 
controller(s) 00:22:06.110 07:02:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:06.110 07:02:27 -- common/autotest_common.sh@1208 -- # local i=0 00:22:06.110 07:02:27 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:06.110 07:02:27 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:22:06.110 07:02:27 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:06.110 07:02:27 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:06.110 07:02:27 -- common/autotest_common.sh@1220 -- # return 0 00:22:06.110 07:02:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:06.110 07:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.110 07:02:27 -- common/autotest_common.sh@10 -- # set +x 00:22:06.110 07:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.110 07:02:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.110 07:02:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:07.046 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:07.046 07:02:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:07.046 07:02:28 -- common/autotest_common.sh@1208 -- # local i=0 00:22:07.046 07:02:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:07.046 07:02:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:22:07.046 07:02:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:07.046 07:02:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:07.046 07:02:28 -- common/autotest_common.sh@1220 -- # return 0 00:22:07.046 07:02:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:07.046 07:02:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.046 07:02:28 -- common/autotest_common.sh@10 -- # set +x 00:22:07.046 07:02:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.046 07:02:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:07.046 07:02:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:07.981 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:07.981 07:02:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:07.981 07:02:29 -- common/autotest_common.sh@1208 -- # local i=0 00:22:07.981 07:02:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:07.981 07:02:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:07.981 07:02:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:07.981 07:02:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:07.981 07:02:29 -- common/autotest_common.sh@1220 -- # return 0 00:22:07.981 07:02:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:07.981 07:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.981 07:02:29 -- common/autotest_common.sh@10 -- # set +x 00:22:07.981 07:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.981 07:02:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:07.982 07:02:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:08.916 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:08.916 07:02:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK8 00:22:08.916 07:02:30 -- common/autotest_common.sh@1208 -- # local i=0 00:22:08.916 07:02:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:08.916 07:02:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:08.916 07:02:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:08.916 07:02:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:08.916 07:02:30 -- common/autotest_common.sh@1220 -- # return 0 00:22:08.916 07:02:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:08.916 07:02:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.916 07:02:30 -- common/autotest_common.sh@10 -- # set +x 00:22:08.916 07:02:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.916 07:02:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.916 07:02:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:09.852 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:09.852 07:02:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:09.852 07:02:31 -- common/autotest_common.sh@1208 -- # local i=0 00:22:09.852 07:02:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:09.852 07:02:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:09.852 07:02:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:09.852 07:02:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:10.110 07:02:31 -- common/autotest_common.sh@1220 -- # return 0 00:22:10.110 07:02:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:10.111 07:02:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.111 07:02:31 -- common/autotest_common.sh@10 -- # set +x 00:22:10.111 07:02:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.111 07:02:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.111 07:02:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:11.046 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:11.046 07:02:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:11.046 07:02:32 -- common/autotest_common.sh@1208 -- # local i=0 00:22:11.046 07:02:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:11.046 07:02:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:11.046 07:02:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:11.046 07:02:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:11.046 07:02:32 -- common/autotest_common.sh@1220 -- # return 0 00:22:11.046 07:02:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:11.046 07:02:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.046 07:02:32 -- common/autotest_common.sh@10 -- # set +x 00:22:11.046 07:02:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.046 07:02:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.046 07:02:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:11.982 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:11.982 07:02:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:11.982 07:02:33 -- common/autotest_common.sh@1208 -- # local i=0 00:22:11.982 
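[editor's note] The eleven passes above and below all instantiate one teardown pattern from multiconnection.sh: disconnect the initiator from cnode$i, poll lsblk until the SPDK$i serial is gone, then delete the subsystem over RPC. A condensed sketch reconstructed from the trace (NVMF_SUBSYS is 11 in this run; the retry cap inside the wait loop is an assumption for illustration, not the real helper body):

    # Sketch of multiconnection.sh@37-40 as traced; rpc_cmd wraps scripts/rpc.py.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        tries=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            (( tries++ > 15 )) && break        # retry cap assumed for illustration
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done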
07:02:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:11.982 07:02:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:11.982 07:02:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:11.982 07:02:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:11.982 07:02:33 -- common/autotest_common.sh@1220 -- # return 0 00:22:11.982 07:02:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:11.982 07:02:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.982 07:02:33 -- common/autotest_common.sh@10 -- # set +x 00:22:11.982 07:02:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.982 07:02:33 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:11.982 07:02:33 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:11.982 07:02:33 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:11.982 07:02:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:11.982 07:02:33 -- nvmf/common.sh@116 -- # sync 00:22:11.982 07:02:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:11.982 07:02:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:11.982 07:02:33 -- nvmf/common.sh@119 -- # set +e 00:22:11.982 07:02:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:11.982 07:02:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:11.982 rmmod nvme_rdma 00:22:11.982 rmmod nvme_fabrics 00:22:11.982 07:02:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:11.982 07:02:33 -- nvmf/common.sh@123 -- # set -e 00:22:11.982 07:02:33 -- nvmf/common.sh@124 -- # return 0 00:22:11.982 07:02:33 -- nvmf/common.sh@477 -- # '[' -n 1401011 ']' 00:22:11.982 07:02:33 -- nvmf/common.sh@478 -- # killprocess 1401011 00:22:11.982 07:02:33 -- common/autotest_common.sh@936 -- # '[' -z 1401011 ']' 00:22:11.982 07:02:33 -- common/autotest_common.sh@940 -- # kill -0 1401011 00:22:11.982 07:02:33 -- common/autotest_common.sh@941 -- # uname 00:22:11.983 07:02:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:11.983 07:02:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1401011 00:22:11.983 07:02:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:11.983 07:02:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:11.983 07:02:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1401011' 00:22:11.983 killing process with pid 1401011 00:22:11.983 07:02:33 -- common/autotest_common.sh@955 -- # kill 1401011 00:22:11.983 07:02:33 -- common/autotest_common.sh@960 -- # wait 1401011 00:22:12.550 07:02:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:12.550 07:02:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:12.550 00:22:12.550 real 1m15.265s 00:22:12.550 user 4m54.533s 00:22:12.550 sys 0m18.709s 00:22:12.550 07:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:12.550 07:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:12.550 ************************************ 00:22:12.550 END TEST nvmf_multiconnection 00:22:12.550 ************************************ 00:22:12.550 07:02:34 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:12.550 07:02:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:12.550 07:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:12.550 07:02:34 -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.550 ************************************ 00:22:12.550 START TEST nvmf_initiator_timeout 00:22:12.550 ************************************ 00:22:12.550 07:02:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:12.809 * Looking for test storage... 00:22:12.809 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:12.809 07:02:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:12.809 07:02:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:12.809 07:02:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:12.809 07:02:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:12.809 07:02:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:12.809 07:02:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:12.809 07:02:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:12.809 07:02:34 -- scripts/common.sh@335 -- # IFS=.-: 00:22:12.809 07:02:34 -- scripts/common.sh@335 -- # read -ra ver1 00:22:12.809 07:02:34 -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.809 07:02:34 -- scripts/common.sh@336 -- # read -ra ver2 00:22:12.809 07:02:34 -- scripts/common.sh@337 -- # local 'op=<' 00:22:12.809 07:02:34 -- scripts/common.sh@339 -- # ver1_l=2 00:22:12.809 07:02:34 -- scripts/common.sh@340 -- # ver2_l=1 00:22:12.809 07:02:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:12.809 07:02:34 -- scripts/common.sh@343 -- # case "$op" in 00:22:12.809 07:02:34 -- scripts/common.sh@344 -- # : 1 00:22:12.809 07:02:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:12.809 07:02:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.809 07:02:34 -- scripts/common.sh@364 -- # decimal 1 00:22:12.809 07:02:34 -- scripts/common.sh@352 -- # local d=1 00:22:12.809 07:02:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.809 07:02:34 -- scripts/common.sh@354 -- # echo 1 00:22:12.809 07:02:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:12.809 07:02:34 -- scripts/common.sh@365 -- # decimal 2 00:22:12.809 07:02:34 -- scripts/common.sh@352 -- # local d=2 00:22:12.809 07:02:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.809 07:02:34 -- scripts/common.sh@354 -- # echo 2 00:22:12.809 07:02:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:12.809 07:02:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:12.809 07:02:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:12.809 07:02:34 -- scripts/common.sh@367 -- # return 0 00:22:12.809 07:02:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.809 07:02:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.809 --rc genhtml_branch_coverage=1 00:22:12.809 --rc genhtml_function_coverage=1 00:22:12.809 --rc genhtml_legend=1 00:22:12.809 --rc geninfo_all_blocks=1 00:22:12.809 --rc geninfo_unexecuted_blocks=1 00:22:12.809 00:22:12.809 ' 00:22:12.809 07:02:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.809 --rc genhtml_branch_coverage=1 00:22:12.809 --rc genhtml_function_coverage=1 00:22:12.809 --rc genhtml_legend=1 00:22:12.809 --rc geninfo_all_blocks=1 00:22:12.809 --rc geninfo_unexecuted_blocks=1 00:22:12.809 00:22:12.809 ' 00:22:12.809 07:02:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.809 --rc genhtml_branch_coverage=1 00:22:12.809 --rc genhtml_function_coverage=1 00:22:12.809 --rc genhtml_legend=1 00:22:12.809 --rc geninfo_all_blocks=1 00:22:12.809 --rc geninfo_unexecuted_blocks=1 00:22:12.809 00:22:12.809 ' 00:22:12.809 07:02:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:12.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.810 --rc genhtml_branch_coverage=1 00:22:12.810 --rc genhtml_function_coverage=1 00:22:12.810 --rc genhtml_legend=1 00:22:12.810 --rc geninfo_all_blocks=1 00:22:12.810 --rc geninfo_unexecuted_blocks=1 00:22:12.810 00:22:12.810 ' 00:22:12.810 07:02:34 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.810 07:02:34 -- nvmf/common.sh@7 -- # uname -s 00:22:12.810 07:02:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.810 07:02:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.810 07:02:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.810 07:02:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.810 07:02:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.810 07:02:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.810 07:02:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.810 07:02:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.810 07:02:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.810 07:02:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.810 07:02:34 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:12.810 07:02:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:12.810 07:02:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.810 07:02:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.810 07:02:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.810 07:02:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:12.810 07:02:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.810 07:02:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.810 07:02:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.810 07:02:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.810 07:02:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.810 07:02:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.810 07:02:34 -- paths/export.sh@5 -- # export PATH 00:22:12.810 07:02:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.810 07:02:34 -- nvmf/common.sh@46 -- # : 0 00:22:12.810 07:02:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:12.810 07:02:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:12.810 07:02:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:12.810 07:02:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.810 07:02:34 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.810 07:02:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:12.810 07:02:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:12.810 07:02:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:12.810 07:02:34 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:12.810 07:02:34 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:12.810 07:02:34 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:12.810 07:02:34 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:12.810 07:02:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.810 07:02:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:12.810 07:02:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:12.810 07:02:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:12.810 07:02:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.810 07:02:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.810 07:02:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.810 07:02:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:12.810 07:02:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:12.810 07:02:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:12.810 07:02:34 -- common/autotest_common.sh@10 -- # set +x 00:22:19.378 07:02:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:19.378 07:02:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:19.378 07:02:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:19.378 07:02:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:19.378 07:02:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:19.378 07:02:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:19.378 07:02:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:19.378 07:02:40 -- nvmf/common.sh@294 -- # net_devs=() 00:22:19.378 07:02:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:19.378 07:02:40 -- nvmf/common.sh@295 -- # e810=() 00:22:19.378 07:02:40 -- nvmf/common.sh@295 -- # local -ga e810 00:22:19.378 07:02:40 -- nvmf/common.sh@296 -- # x722=() 00:22:19.378 07:02:40 -- nvmf/common.sh@296 -- # local -ga x722 00:22:19.378 07:02:40 -- nvmf/common.sh@297 -- # mlx=() 00:22:19.378 07:02:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:19.378 07:02:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.378 07:02:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:19.378 07:02:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:22:19.378 07:02:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:19.378 07:02:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:19.378 07:02:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:19.378 07:02:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:19.378 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:19.378 07:02:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:19.378 07:02:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:19.378 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:19.378 07:02:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:19.378 07:02:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:19.378 07:02:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.378 07:02:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:19.378 07:02:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.378 07:02:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:19.378 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:19.378 07:02:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.378 07:02:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.378 07:02:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:19.378 07:02:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.378 07:02:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:19.378 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:19.378 07:02:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.378 07:02:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:19.378 07:02:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:19.378 07:02:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:19.378 07:02:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:19.378 07:02:40 -- nvmf/common.sh@57 -- # uname 00:22:19.378 07:02:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:19.378 07:02:40 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:22:19.378 07:02:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:19.378 07:02:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:19.378 07:02:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:19.378 07:02:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:19.378 07:02:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:19.378 07:02:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:19.378 07:02:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:19.378 07:02:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:19.378 07:02:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:19.378 07:02:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:19.378 07:02:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:19.378 07:02:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:19.378 07:02:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:19.378 07:02:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:19.378 07:02:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:19.378 07:02:40 -- nvmf/common.sh@104 -- # continue 2 00:22:19.378 07:02:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:19.378 07:02:40 -- nvmf/common.sh@104 -- # continue 2 00:22:19.378 07:02:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:19.378 07:02:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:19.378 07:02:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:19.378 07:02:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:19.378 07:02:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:19.378 07:02:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:19.378 07:02:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:19.378 07:02:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:19.378 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:19.378 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:19.378 altname enp217s0f0np0 00:22:19.378 altname ens818f0np0 00:22:19.378 inet 192.168.100.8/24 scope global mlx_0_0 00:22:19.378 valid_lft forever preferred_lft forever 00:22:19.378 07:02:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:19.378 07:02:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:19.378 07:02:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:19.378 07:02:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:19.378 07:02:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:19.378 07:02:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:19.378 07:02:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:19.378 07:02:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:19.378 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:19.378 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:19.378 altname enp217s0f1np1 00:22:19.378 altname ens818f1np1 00:22:19.378 inet 192.168.100.9/24 scope global mlx_0_1 00:22:19.378 valid_lft forever preferred_lft forever 00:22:19.378 07:02:40 -- nvmf/common.sh@410 -- # return 0 00:22:19.378 07:02:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:19.378 07:02:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:19.378 07:02:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:19.378 07:02:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:19.378 07:02:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:19.378 07:02:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:19.378 07:02:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:19.378 07:02:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:19.378 07:02:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:19.378 07:02:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:19.378 07:02:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:19.378 07:02:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:19.379 07:02:40 -- nvmf/common.sh@104 -- # continue 2 00:22:19.379 07:02:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:19.379 07:02:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:19.379 07:02:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:19.379 07:02:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:19.379 07:02:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:19.379 07:02:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:19.379 07:02:40 -- nvmf/common.sh@104 -- # continue 2 00:22:19.379 07:02:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:19.379 07:02:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:19.379 07:02:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:19.379 07:02:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:19.379 07:02:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:19.379 07:02:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:19.379 07:02:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:19.379 07:02:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:19.379 07:02:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:19.379 07:02:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:19.379 07:02:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:19.379 07:02:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:19.379 07:02:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:19.379 192.168.100.9' 00:22:19.379 07:02:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:19.379 192.168.100.9' 00:22:19.379 07:02:40 -- nvmf/common.sh@445 -- # head -n 1 00:22:19.379 07:02:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:19.379 07:02:40 -- nvmf/common.sh@446 -- # tail -n +2 00:22:19.379 07:02:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:19.379 192.168.100.9' 00:22:19.379 07:02:40 -- nvmf/common.sh@446 -- # head -n 1 00:22:19.379 07:02:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:19.379 07:02:40 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:19.379 07:02:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:19.379 07:02:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:19.379 07:02:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:19.379 07:02:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:19.379 07:02:40 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:19.379 07:02:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:19.379 07:02:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.379 07:02:40 -- common/autotest_common.sh@10 -- # set +x 00:22:19.379 07:02:40 -- nvmf/common.sh@469 -- # nvmfpid=1416136 00:22:19.379 07:02:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:19.379 07:02:40 -- nvmf/common.sh@470 -- # waitforlisten 1416136 00:22:19.379 07:02:40 -- common/autotest_common.sh@829 -- # '[' -z 1416136 ']' 00:22:19.379 07:02:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.379 07:02:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.379 07:02:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.379 07:02:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.379 07:02:40 -- common/autotest_common.sh@10 -- # set +x 00:22:19.379 [2024-12-15 07:02:40.625112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:19.379 [2024-12-15 07:02:40.625163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.379 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.379 [2024-12-15 07:02:40.696839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.379 [2024-12-15 07:02:40.734939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:19.379 [2024-12-15 07:02:40.735062] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.379 [2024-12-15 07:02:40.735073] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.379 [2024-12-15 07:02:40.735082] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
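[editor's note] Target bring-up, as traced just above, reduces to launching nvmf_tgt with the requested core mask and blocking until its RPC socket answers. A rough equivalent of the nvmfappstart/waitforlisten pair (the polling loop is an assumed stand-in for the real waitforlisten helper):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Assumed stand-in for waitforlisten: poll /var/tmp/spdk.sock until RPC responds.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
        sleep 0.5
    done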
00:22:19.379 [2024-12-15 07:02:40.735135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.379 [2024-12-15 07:02:40.735221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.379 [2024-12-15 07:02:40.735282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.379 [2024-12-15 07:02:40.735283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.946 07:02:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.946 07:02:41 -- common/autotest_common.sh@862 -- # return 0 00:22:19.946 07:02:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:19.946 07:02:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:19.946 07:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:19.946 07:02:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.946 07:02:41 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:19.946 07:02:41 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.946 07:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.946 07:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:19.946 Malloc0 00:22:19.946 07:02:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.946 07:02:41 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:19.946 07:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.946 07:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:19.946 Delay0 00:22:19.946 07:02:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.946 07:02:41 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:19.946 07:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.946 07:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:19.946 [2024-12-15 07:02:41.552630] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x83c5b0/0x846980) succeed. 00:22:19.946 [2024-12-15 07:02:41.562589] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x83db50/0x888020) succeed. 
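[editor's note] The RPCs traced here and continued just below assemble the whole device under test: a 64 MiB / 512 B malloc bdev wrapped in a delay bdev with 30 us average and p99 latencies in both directions, the RDMA transport, and subsystem cnode1 (serial SPDKISFASTANDAWESOME) exporting Delay0 on 192.168.100.8:4420. The same sequence as plain rpc.py calls, for reference (the test issues them through its rpc_cmd wrapper):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # latencies in us
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420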
00:22:20.204 07:02:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.204 07:02:41 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:20.204 07:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.204 07:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 07:02:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.204 07:02:41 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:20.204 07:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.204 07:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 07:02:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.204 07:02:41 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:20.204 07:02:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.204 07:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 [2024-12-15 07:02:41.705594] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:20.205 07:02:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.205 07:02:41 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:21.139 07:02:42 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:21.140 07:02:42 -- common/autotest_common.sh@1187 -- # local i=0 00:22:21.140 07:02:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.140 07:02:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:21.140 07:02:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:23.672 07:02:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:23.672 07:02:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:23.672 07:02:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:23.672 07:02:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:23.672 07:02:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.672 07:02:44 -- common/autotest_common.sh@1197 -- # return 0 00:22:23.672 07:02:44 -- target/initiator_timeout.sh@35 -- # fio_pid=1416779 00:22:23.672 07:02:44 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:23.672 07:02:44 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:23.672 [global] 00:22:23.672 thread=1 00:22:23.672 invalidate=1 00:22:23.672 rw=write 00:22:23.672 time_based=1 00:22:23.672 runtime=60 00:22:23.672 ioengine=libaio 00:22:23.672 direct=1 00:22:23.672 bs=4096 00:22:23.672 iodepth=1 00:22:23.672 norandommap=0 00:22:23.672 numjobs=1 00:22:23.672 00:22:23.672 verify_dump=1 00:22:23.672 verify_backlog=512 00:22:23.672 verify_state_save=0 00:22:23.672 do_verify=1 00:22:23.672 verify=crc32c-intel 00:22:23.672 [job0] 00:22:23.672 filename=/dev/nvme0n1 00:22:23.672 Could not set queue depth (nvme0n1) 00:22:23.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:23.672 fio-3.35 00:22:23.672 Starting 1 thread 00:22:26.272 07:02:47 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:26.272 07:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.272 07:02:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 true 00:22:26.272 07:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.272 07:02:47 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:26.272 07:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.272 07:02:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 true 00:22:26.272 07:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.272 07:02:47 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:26.272 07:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.272 07:02:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 true 00:22:26.272 07:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.272 07:02:47 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:26.272 07:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.272 07:02:47 -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 true 00:22:26.272 07:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.272 07:02:47 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:29.561 07:02:50 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:29.561 07:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.561 07:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:29.561 true 00:22:29.561 07:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.561 07:02:50 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:29.561 07:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.561 07:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:29.561 true 00:22:29.561 07:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.561 07:02:50 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:29.561 07:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.561 07:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:29.561 true 00:22:29.561 07:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.561 07:02:50 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:29.561 07:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.561 07:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:29.561 true 00:22:29.561 07:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.561 07:02:50 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:29.561 07:02:50 -- target/initiator_timeout.sh@54 -- # wait 1416779 00:23:25.798 00:23:25.798 job0: (groupid=0, jobs=1): err= 0: pid=1416935: Sun Dec 15 07:03:45 2024 00:23:25.798 read: IOPS=1254, BW=5018KiB/s (5138kB/s)(294MiB/60000msec) 00:23:25.798 slat (nsec): min=8297, max=51501, avg=9174.00, stdev=1051.90 00:23:25.798 clat (usec): min=72, max=310, avg=105.64, stdev= 6.99 00:23:25.798 lat (usec): min=95, max=319, avg=114.81, stdev= 7.06 00:23:25.798 clat percentiles (usec): 00:23:25.798 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 100], 00:23:25.798 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 
108], 00:23:25.798 | 70.00th=[ 110], 80.00th=[ 112], 90.00th=[ 115], 95.00th=[ 118], 00:23:25.798 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 143], 00:23:25.798 | 99.99th=[ 273] 00:23:25.798 write: IOPS=1256, BW=5028KiB/s (5148kB/s)(295MiB/60000msec); 0 zone resets 00:23:25.798 slat (usec): min=6, max=15714, avg=12.28, stdev=78.86 00:23:25.798 clat (usec): min=67, max=42291k, avg=663.46, stdev=153999.76 00:23:25.798 lat (usec): min=90, max=42291k, avg=675.74, stdev=153999.78 00:23:25.798 clat percentiles (usec): 00:23:25.798 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 97], 00:23:25.798 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 104], 00:23:25.798 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 115], 00:23:25.798 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 135], 00:23:25.798 | 99.99th=[ 281] 00:23:25.798 bw ( KiB/s): min= 2856, max=19384, per=100.00%, avg=16375.78, stdev=3145.03, samples=36 00:23:25.798 iops : min= 714, max= 4846, avg=4094.00, stdev=786.28, samples=36 00:23:25.798 lat (usec) : 100=27.61%, 250=72.37%, 500=0.01% 00:23:25.798 lat (msec) : >=2000=0.01% 00:23:25.798 cpu : usr=2.07%, sys=3.30%, ctx=150688, majf=0, minf=144 00:23:25.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:25.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:25.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:25.798 issued rwts: total=75264,75414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:25.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:25.798 00:23:25.798 Run status group 0 (all jobs): 00:23:25.798 READ: bw=5018KiB/s (5138kB/s), 5018KiB/s-5018KiB/s (5138kB/s-5138kB/s), io=294MiB (308MB), run=60000-60000msec 00:23:25.798 WRITE: bw=5028KiB/s (5148kB/s), 5028KiB/s-5028KiB/s (5148kB/s-5148kB/s), io=295MiB (309MB), run=60000-60000msec 00:23:25.798 00:23:25.798 Disk stats (read/write): 00:23:25.798 nvme0n1: ios=75065/75069, merge=0/0, ticks=7183/7072, in_queue=14255, util=99.85% 00:23:25.798 07:03:45 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:25.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:25.798 07:03:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:25.798 07:03:46 -- common/autotest_common.sh@1208 -- # local i=0 00:23:25.798 07:03:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:25.798 07:03:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:25.798 07:03:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:25.798 07:03:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:25.798 07:03:46 -- common/autotest_common.sh@1220 -- # return 0 00:23:25.798 07:03:46 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:25.798 07:03:46 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:25.798 nvmf hotplug test: fio successful as expected 00:23:25.798 07:03:46 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.798 07:03:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.798 07:03:46 -- common/autotest_common.sh@10 -- # set +x 00:23:25.798 07:03:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.798 07:03:46 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:25.798 
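[editor's note] The verdict just above ("nvmf hotplug test: fio successful as expected", fio_status 0) is earned by the latency toggling traced earlier: while fio writes to /dev/nvme0n1 over the connection made with nvme connect -i 15, the Delay0 latencies are raised from 30 us into the tens-of-seconds range, held for 3 s, then restored, and fio must still exit 0. Condensed from the traced initiator_timeout.sh@40-54 sequence (fio_pid as recorded above):

    rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000    # ~31 s per read
    rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000   # as traced; ~310 s
    sleep 3                                                        # let I/O stall behind the delay
    for m in avg_read avg_write p99_read p99_write; do
        rpc_cmd bdev_delay_update_latency Delay0 "$m" 30           # back to 30 us
    done
    wait "$fio_pid"                                                # rc 0 => initiator rode out the stall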
07:03:46 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:25.798 07:03:46 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:25.798 07:03:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:25.798 07:03:46 -- nvmf/common.sh@116 -- # sync 00:23:25.798 07:03:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:25.798 07:03:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:25.798 07:03:46 -- nvmf/common.sh@119 -- # set +e 00:23:25.798 07:03:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:25.798 07:03:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:25.798 rmmod nvme_rdma 00:23:25.798 rmmod nvme_fabrics 00:23:25.798 07:03:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:25.798 07:03:46 -- nvmf/common.sh@123 -- # set -e 00:23:25.798 07:03:46 -- nvmf/common.sh@124 -- # return 0 00:23:25.798 07:03:46 -- nvmf/common.sh@477 -- # '[' -n 1416136 ']' 00:23:25.798 07:03:46 -- nvmf/common.sh@478 -- # killprocess 1416136 00:23:25.798 07:03:46 -- common/autotest_common.sh@936 -- # '[' -z 1416136 ']' 00:23:25.798 07:03:46 -- common/autotest_common.sh@940 -- # kill -0 1416136 00:23:25.798 07:03:46 -- common/autotest_common.sh@941 -- # uname 00:23:25.798 07:03:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:25.798 07:03:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1416136 00:23:25.798 07:03:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:25.798 07:03:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:25.798 07:03:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1416136' 00:23:25.798 killing process with pid 1416136 00:23:25.798 07:03:46 -- common/autotest_common.sh@955 -- # kill 1416136 00:23:25.798 07:03:46 -- common/autotest_common.sh@960 -- # wait 1416136 00:23:25.798 07:03:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:25.798 07:03:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:25.798 00:23:25.798 real 1m12.536s 00:23:25.798 user 4m33.617s 00:23:25.798 sys 0m7.495s 00:23:25.798 07:03:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:25.798 07:03:46 -- common/autotest_common.sh@10 -- # set +x 00:23:25.798 ************************************ 00:23:25.798 END TEST nvmf_initiator_timeout 00:23:25.798 ************************************ 00:23:25.798 07:03:46 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:25.798 07:03:46 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:25.798 07:03:46 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:25.798 07:03:46 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:25.798 07:03:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:25.798 07:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:25.798 07:03:46 -- common/autotest_common.sh@10 -- # set +x 00:23:25.798 ************************************ 00:23:25.798 START TEST nvmf_shutdown 00:23:25.798 ************************************ 00:23:25.798 07:03:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:25.798 * Looking for test storage... 
00:23:25.798 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:25.798 07:03:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:25.798 07:03:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:25.798 07:03:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:25.798 07:03:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:25.798 07:03:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:25.798 07:03:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:25.798 07:03:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:25.798 07:03:46 -- scripts/common.sh@335 -- # IFS=.-: 00:23:25.798 07:03:46 -- scripts/common.sh@335 -- # read -ra ver1 00:23:25.798 07:03:46 -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.798 07:03:46 -- scripts/common.sh@336 -- # read -ra ver2 00:23:25.798 07:03:46 -- scripts/common.sh@337 -- # local 'op=<' 00:23:25.798 07:03:46 -- scripts/common.sh@339 -- # ver1_l=2 00:23:25.798 07:03:46 -- scripts/common.sh@340 -- # ver2_l=1 00:23:25.798 07:03:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:25.798 07:03:46 -- scripts/common.sh@343 -- # case "$op" in 00:23:25.798 07:03:46 -- scripts/common.sh@344 -- # : 1 00:23:25.798 07:03:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:25.799 07:03:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:25.799 07:03:46 -- scripts/common.sh@364 -- # decimal 1 00:23:25.799 07:03:46 -- scripts/common.sh@352 -- # local d=1 00:23:25.799 07:03:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.799 07:03:46 -- scripts/common.sh@354 -- # echo 1 00:23:25.799 07:03:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:25.799 07:03:46 -- scripts/common.sh@365 -- # decimal 2 00:23:25.799 07:03:46 -- scripts/common.sh@352 -- # local d=2 00:23:25.799 07:03:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.799 07:03:46 -- scripts/common.sh@354 -- # echo 2 00:23:25.799 07:03:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:25.799 07:03:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:25.799 07:03:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:25.799 07:03:46 -- scripts/common.sh@367 -- # return 0 00:23:25.799 07:03:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.799 07:03:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.799 --rc genhtml_branch_coverage=1 00:23:25.799 --rc genhtml_function_coverage=1 00:23:25.799 --rc genhtml_legend=1 00:23:25.799 --rc geninfo_all_blocks=1 00:23:25.799 --rc geninfo_unexecuted_blocks=1 00:23:25.799 00:23:25.799 ' 00:23:25.799 07:03:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.799 --rc genhtml_branch_coverage=1 00:23:25.799 --rc genhtml_function_coverage=1 00:23:25.799 --rc genhtml_legend=1 00:23:25.799 --rc geninfo_all_blocks=1 00:23:25.799 --rc geninfo_unexecuted_blocks=1 00:23:25.799 00:23:25.799 ' 00:23:25.799 07:03:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.799 --rc genhtml_branch_coverage=1 00:23:25.799 --rc genhtml_function_coverage=1 00:23:25.799 --rc genhtml_legend=1 00:23:25.799 --rc geninfo_all_blocks=1 00:23:25.799 --rc geninfo_unexecuted_blocks=1 00:23:25.799 00:23:25.799 ' 
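[editor's note] The version probe walked through above (lt 1.15 2 via cmp_versions, deciding which lcov rc options to export) is a plain field-by-field dotted-version comparison. A self-contained condensation, assuming only what the trace shows (version_lt is a hypothetical name for this sketch):

    # Returns 0 when $1 < $2, comparing dot/dash-separated numeric fields.
    version_lt() {
        local -a a b; IFS=.- read -ra a <<< "$1"; IFS=.- read -ra b <<< "$2"
        local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov < 2: use legacy branch/function coverage rc options'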
00:23:25.799 07:03:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:25.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.799 --rc genhtml_branch_coverage=1 00:23:25.799 --rc genhtml_function_coverage=1 00:23:25.799 --rc genhtml_legend=1 00:23:25.799 --rc geninfo_all_blocks=1 00:23:25.799 --rc geninfo_unexecuted_blocks=1 00:23:25.799 00:23:25.799 ' 00:23:25.799 07:03:46 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.799 07:03:46 -- nvmf/common.sh@7 -- # uname -s 00:23:25.799 07:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.799 07:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.799 07:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.799 07:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.799 07:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.799 07:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.799 07:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.799 07:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.799 07:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.799 07:03:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.799 07:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:25.799 07:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:25.799 07:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.799 07:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.799 07:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.799 07:03:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:25.799 07:03:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.799 07:03:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.799 07:03:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.799 07:03:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.799 07:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.799 07:03:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.799 07:03:46 -- paths/export.sh@5 -- # export PATH 00:23:25.799 07:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.799 07:03:46 -- nvmf/common.sh@46 -- # : 0 00:23:25.799 07:03:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:25.799 07:03:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:25.799 07:03:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:25.799 07:03:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.799 07:03:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.799 07:03:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:25.799 07:03:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:25.799 07:03:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:25.799 07:03:46 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.799 07:03:46 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.799 07:03:46 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:25.799 07:03:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:25.799 07:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:25.799 07:03:46 -- common/autotest_common.sh@10 -- # set +x 00:23:25.799 ************************************ 00:23:25.799 START TEST nvmf_shutdown_tc1 00:23:25.799 ************************************ 00:23:25.799 07:03:46 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:23:25.799 07:03:46 -- target/shutdown.sh@74 -- # starttarget 00:23:25.799 07:03:46 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:25.799 07:03:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:25.799 07:03:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.799 07:03:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:25.799 07:03:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:25.799 07:03:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:25.799 07:03:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.799 07:03:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.799 07:03:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.799 07:03:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:25.799 07:03:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:25.799 07:03:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:25.799 07:03:46 -- common/autotest_common.sh@10 -- # set +x 00:23:32.372 07:03:53 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:32.372 07:03:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:32.372 07:03:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:32.372 07:03:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:32.372 07:03:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:32.372 07:03:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:32.372 07:03:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:32.372 07:03:53 -- nvmf/common.sh@294 -- # net_devs=() 00:23:32.372 07:03:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:32.372 07:03:53 -- nvmf/common.sh@295 -- # e810=() 00:23:32.372 07:03:53 -- nvmf/common.sh@295 -- # local -ga e810 00:23:32.372 07:03:53 -- nvmf/common.sh@296 -- # x722=() 00:23:32.372 07:03:53 -- nvmf/common.sh@296 -- # local -ga x722 00:23:32.372 07:03:53 -- nvmf/common.sh@297 -- # mlx=() 00:23:32.372 07:03:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:32.372 07:03:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.372 07:03:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.373 07:03:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.373 07:03:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:32.373 07:03:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:32.373 07:03:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:32.373 07:03:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:32.373 07:03:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:32.373 07:03:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:32.373 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:32.373 07:03:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:32.373 07:03:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:32.373 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:32.373 07:03:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:32.373 07:03:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:32.373 07:03:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.373 07:03:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.373 07:03:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.373 07:03:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:32.373 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.373 07:03:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.373 07:03:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:32.373 07:03:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.373 07:03:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:32.373 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.373 07:03:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:32.373 07:03:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:32.373 07:03:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:32.373 07:03:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:32.373 07:03:53 -- nvmf/common.sh@57 -- # uname 00:23:32.373 07:03:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:32.373 07:03:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:32.373 07:03:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:32.373 07:03:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:32.373 07:03:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:32.373 07:03:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:32.373 07:03:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:32.373 07:03:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:32.373 07:03:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:32.373 07:03:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:32.373 07:03:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:32.373 07:03:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:32.373 07:03:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:32.373 07:03:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:32.373 07:03:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:32.373 07:03:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:32.373 07:03:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@104 -- # continue 2 
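The device scan traced above matches NICs by PCI vendor/device ID (0x15b3/0x1015 is a Mellanox ConnectX-4 Lx part) and then resolves each matching PCI function to its netdev through sysfs, which is where the "Found net devices under 0000:d9:00.x" lines come from. A standalone sketch of that sysfs walk, assuming a Linux host with the same IDs:

    vendor=0x15b3 device=0x1015   # Mellanox IDs taken from the log above
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$vendor" && $(cat "$pci/device") == "$device" ]] || continue
        for net in "$pci"/net/*; do
            # prints e.g. "0000:d9:00.0: mlx_0_0", mirroring the log lines
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
        done
    done
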
00:23:32.373 07:03:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@104 -- # continue 2 00:23:32.373 07:03:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:32.373 07:03:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.373 07:03:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:32.373 07:03:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:32.373 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:32.373 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:32.373 altname enp217s0f0np0 00:23:32.373 altname ens818f0np0 00:23:32.373 inet 192.168.100.8/24 scope global mlx_0_0 00:23:32.373 valid_lft forever preferred_lft forever 00:23:32.373 07:03:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:32.373 07:03:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.373 07:03:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:32.373 07:03:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:32.373 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:32.373 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:32.373 altname enp217s0f1np1 00:23:32.373 altname ens818f1np1 00:23:32.373 inet 192.168.100.9/24 scope global mlx_0_1 00:23:32.373 valid_lft forever preferred_lft forever 00:23:32.373 07:03:53 -- nvmf/common.sh@410 -- # return 0 00:23:32.373 07:03:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:32.373 07:03:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:32.373 07:03:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:32.373 07:03:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:32.373 07:03:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:32.373 07:03:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:32.373 07:03:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:32.373 07:03:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:32.373 07:03:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:32.373 07:03:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:32.373 07:03:53 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@104 -- # continue 2 00:23:32.373 07:03:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:32.373 07:03:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:32.373 07:03:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@104 -- # continue 2 00:23:32.373 07:03:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:32.373 07:03:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.373 07:03:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:32.373 07:03:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:32.373 07:03:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:32.373 07:03:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:32.373 192.168.100.9' 00:23:32.373 07:03:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:32.373 192.168.100.9' 00:23:32.373 07:03:53 -- nvmf/common.sh@445 -- # head -n 1 00:23:32.373 07:03:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:32.373 07:03:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:32.373 192.168.100.9' 00:23:32.373 07:03:53 -- nvmf/common.sh@446 -- # tail -n +2 00:23:32.373 07:03:53 -- nvmf/common.sh@446 -- # head -n 1 00:23:32.373 07:03:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:32.373 07:03:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:32.373 07:03:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:32.373 07:03:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:32.373 07:03:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:32.373 07:03:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:32.373 07:03:53 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:32.373 07:03:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:32.373 07:03:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.373 07:03:53 -- common/autotest_common.sh@10 -- # set +x 00:23:32.374 07:03:53 -- nvmf/common.sh@469 -- # nvmfpid=1430993 00:23:32.374 07:03:53 -- nvmf/common.sh@470 -- # waitforlisten 1430993 00:23:32.374 07:03:53 -- common/autotest_common.sh@829 -- # '[' -z 1430993 ']' 00:23:32.374 07:03:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.374 07:03:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.374 07:03:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
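The allocate_nic_ips/get_ip_address steps above harvest the IPv4 address of each RDMA netdev with a one-line pipeline (`ip -o` prints one record per line, field 4 is "addr/prefix"), then split the list into first and second target IPs. The same logic as a self-contained snippet, with the interface names assumed from this host:

    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)  # 192.168.100.9
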
00:23:32.374 07:03:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.374 07:03:53 -- common/autotest_common.sh@10 -- # set +x 00:23:32.374 07:03:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:32.374 [2024-12-15 07:03:53.273906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:32.374 [2024-12-15 07:03:53.273957] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.374 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.374 [2024-12-15 07:03:53.344126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.374 [2024-12-15 07:03:53.381698] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:32.374 [2024-12-15 07:03:53.381805] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.374 [2024-12-15 07:03:53.381815] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.374 [2024-12-15 07:03:53.381824] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.374 [2024-12-15 07:03:53.381927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.374 [2024-12-15 07:03:53.382014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.374 [2024-12-15 07:03:53.382121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.374 [2024-12-15 07:03:53.382122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:32.633 07:03:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.633 07:03:54 -- common/autotest_common.sh@862 -- # return 0 00:23:32.633 07:03:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:32.633 07:03:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.633 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:23:32.633 07:03:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.633 07:03:54 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:32.633 07:03:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.633 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:23:32.633 [2024-12-15 07:03:54.171489] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13723c0/0x1376890) succeed. 00:23:32.633 [2024-12-15 07:03:54.180859] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1373960/0x13b7f30) succeed. 
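nvmfappstart plus the rpc_cmd at shutdown.sh@20 above reduce to three steps: launch nvmf_tgt, wait until its RPC socket answers, then create an RDMA transport. A rough equivalent as direct commands (paths relative to an SPDK checkout; the poll loop is a simplified stand-in for the real waitforlisten helper):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll until the target is up and answering on its default RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        sleep 0.5
    done
    ./scripts/rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024
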
00:23:32.893 07:03:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.893 07:03:54 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:32.893 07:03:54 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:32.893 07:03:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.893 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:23:32.893 07:03:54 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:32.893 07:03:54 -- target/shutdown.sh@28 -- # cat 00:23:32.893 07:03:54 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:32.893 07:03:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.893 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:23:32.893 Malloc1 00:23:32.893 [2024-12-15 07:03:54.391277] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:32.893 Malloc2 00:23:32.893 Malloc3 00:23:32.893 Malloc4 00:23:33.153 Malloc5 00:23:33.153 Malloc6 00:23:33.153 Malloc7 00:23:33.153 Malloc8 00:23:33.153 Malloc9 00:23:33.153 Malloc10 00:23:33.413 07:03:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.413 07:03:54 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:33.413 07:03:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.413 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:23:33.413 07:03:54 -- target/shutdown.sh@78 -- # perfpid=1431311 00:23:33.413 07:03:54 -- target/shutdown.sh@79 -- # waitforlisten 1431311 /var/tmp/bdevperf.sock 00:23:33.413 07:03:54 -- common/autotest_common.sh@829 -- # '[' -z 1431311 ']' 00:23:33.413 07:03:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.413 07:03:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.413 07:03:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
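The `for i ... cat` loop above appends one block per subsystem index to rpcs.txt, which is then replayed against the target in a single batch; the Malloc1 through Malloc10 bdevs reflect MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set at shutdown.sh@11-12, and the listener address is the first RDMA IP found earlier. An illustrative shape of such a batch (the exact RPC list the real script emits may differ):

    for i in {1..10}; do
        printf '%s\n' \
            "bdev_malloc_create -b Malloc$i 64 512" \
            "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
            "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
            "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
    done > rpcs.txt
    ./scripts/rpc.py < rpcs.txt   # replay the batch, one RPC per input line
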
00:23:33.413 07:03:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.413 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:23:33.413 07:03:54 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:33.413 07:03:54 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:33.413 07:03:54 -- nvmf/common.sh@520 -- # config=() 00:23:33.413 07:03:54 -- nvmf/common.sh@520 -- # local subsystem config 00:23:33.413 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.413 { 00:23:33.413 "params": { 00:23:33.413 "name": "Nvme$subsystem", 00:23:33.413 "trtype": "$TEST_TRANSPORT", 00:23:33.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.413 "adrfam": "ipv4", 00:23:33.413 "trsvcid": "$NVMF_PORT", 00:23:33.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.413 "hdgst": ${hdgst:-false}, 00:23:33.413 "ddgst": ${ddgst:-false} 00:23:33.413 }, 00:23:33.413 "method": "bdev_nvme_attach_controller" 00:23:33.413 } 00:23:33.413 EOF 00:23:33.413 )") 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.413 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.413 { 00:23:33.413 "params": { 00:23:33.413 "name": "Nvme$subsystem", 00:23:33.413 "trtype": "$TEST_TRANSPORT", 00:23:33.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.413 "adrfam": "ipv4", 00:23:33.413 "trsvcid": "$NVMF_PORT", 00:23:33.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.413 "hdgst": ${hdgst:-false}, 00:23:33.413 "ddgst": ${ddgst:-false} 00:23:33.413 }, 00:23:33.413 "method": "bdev_nvme_attach_controller" 00:23:33.413 } 00:23:33.413 EOF 00:23:33.413 )") 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.413 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.413 { 00:23:33.413 "params": { 00:23:33.413 "name": "Nvme$subsystem", 00:23:33.413 "trtype": "$TEST_TRANSPORT", 00:23:33.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.413 "adrfam": "ipv4", 00:23:33.413 "trsvcid": "$NVMF_PORT", 00:23:33.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.413 "hdgst": ${hdgst:-false}, 00:23:33.413 "ddgst": ${ddgst:-false} 00:23:33.413 }, 00:23:33.413 "method": "bdev_nvme_attach_controller" 00:23:33.413 } 00:23:33.413 EOF 00:23:33.413 )") 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.413 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.413 { 00:23:33.413 "params": { 00:23:33.413 "name": "Nvme$subsystem", 00:23:33.413 "trtype": "$TEST_TRANSPORT", 00:23:33.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.413 "adrfam": "ipv4", 00:23:33.413 "trsvcid": "$NVMF_PORT", 00:23:33.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.413 "hdgst": ${hdgst:-false}, 00:23:33.413 "ddgst": ${ddgst:-false} 00:23:33.413 }, 00:23:33.413 "method": "bdev_nvme_attach_controller" 00:23:33.413 } 00:23:33.413 EOF 00:23:33.413 )") 00:23:33.413 07:03:54 -- 
nvmf/common.sh@542 -- # cat 00:23:33.413 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.413 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.413 { 00:23:33.413 "params": { 00:23:33.414 "name": "Nvme$subsystem", 00:23:33.414 "trtype": "$TEST_TRANSPORT", 00:23:33.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "$NVMF_PORT", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.414 "hdgst": ${hdgst:-false}, 00:23:33.414 "ddgst": ${ddgst:-false} 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 } 00:23:33.414 EOF 00:23:33.414 )") 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.414 [2024-12-15 07:03:54.882728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:33.414 [2024-12-15 07:03:54.882784] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:33.414 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.414 { 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme$subsystem", 00:23:33.414 "trtype": "$TEST_TRANSPORT", 00:23:33.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "$NVMF_PORT", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.414 "hdgst": ${hdgst:-false}, 00:23:33.414 "ddgst": ${ddgst:-false} 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 } 00:23:33.414 EOF 00:23:33.414 )") 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.414 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.414 { 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme$subsystem", 00:23:33.414 "trtype": "$TEST_TRANSPORT", 00:23:33.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "$NVMF_PORT", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.414 "hdgst": ${hdgst:-false}, 00:23:33.414 "ddgst": ${ddgst:-false} 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 } 00:23:33.414 EOF 00:23:33.414 )") 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.414 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.414 { 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme$subsystem", 00:23:33.414 "trtype": "$TEST_TRANSPORT", 00:23:33.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "$NVMF_PORT", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.414 "hdgst": ${hdgst:-false}, 00:23:33.414 "ddgst": ${ddgst:-false} 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 } 00:23:33.414 EOF 00:23:33.414 )") 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.414 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.414 07:03:54 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.414 { 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme$subsystem", 00:23:33.414 "trtype": "$TEST_TRANSPORT", 00:23:33.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "$NVMF_PORT", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.414 "hdgst": ${hdgst:-false}, 00:23:33.414 "ddgst": ${ddgst:-false} 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 } 00:23:33.414 EOF 00:23:33.414 )") 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.414 07:03:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:33.414 { 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme$subsystem", 00:23:33.414 "trtype": "$TEST_TRANSPORT", 00:23:33.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "$NVMF_PORT", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.414 "hdgst": ${hdgst:-false}, 00:23:33.414 "ddgst": ${ddgst:-false} 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 } 00:23:33.414 EOF 00:23:33.414 )") 00:23:33.414 07:03:54 -- nvmf/common.sh@542 -- # cat 00:23:33.414 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.414 07:03:54 -- nvmf/common.sh@544 -- # jq . 00:23:33.414 07:03:54 -- nvmf/common.sh@545 -- # IFS=, 00:23:33.414 07:03:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme1", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme2", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme3", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme4", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme5", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": 
"ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme6", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme7", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme8", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme9", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 },{ 00:23:33.414 "params": { 00:23:33.414 "name": "Nvme10", 00:23:33.414 "trtype": "rdma", 00:23:33.414 "traddr": "192.168.100.8", 00:23:33.414 "adrfam": "ipv4", 00:23:33.414 "trsvcid": "4420", 00:23:33.414 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:33.414 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:33.414 "hdgst": false, 00:23:33.414 "ddgst": false 00:23:33.414 }, 00:23:33.414 "method": "bdev_nvme_attach_controller" 00:23:33.414 }' 00:23:33.414 [2024-12-15 07:03:54.958765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.415 [2024-12-15 07:03:54.994867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.793 07:03:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.793 07:03:56 -- common/autotest_common.sh@862 -- # return 0 00:23:34.793 07:03:56 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:34.793 07:03:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.793 07:03:56 -- common/autotest_common.sh@10 -- # set +x 00:23:34.793 07:03:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.793 07:03:56 -- target/shutdown.sh@83 -- # kill -9 1431311 00:23:34.793 07:03:56 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:34.793 07:03:56 -- target/shutdown.sh@87 -- # sleep 1 00:23:36.174 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1431311 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json 
<(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:36.174 07:03:57 -- target/shutdown.sh@88 -- # kill -0 1430993 00:23:36.174 07:03:57 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:36.174 07:03:57 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:36.174 07:03:57 -- nvmf/common.sh@520 -- # config=() 00:23:36.174 07:03:57 -- nvmf/common.sh@520 -- # local subsystem config 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- 
nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 [2024-12-15 07:03:57.447162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:36.174 [2024-12-15 07:03:57.447215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431870 ] 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 
-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.174 "method": "bdev_nvme_attach_controller" 00:23:36.174 } 00:23:36.174 EOF 00:23:36.174 )") 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.174 07:03:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:36.174 07:03:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:36.174 { 00:23:36.174 "params": { 00:23:36.174 "name": "Nvme$subsystem", 00:23:36.174 "trtype": "$TEST_TRANSPORT", 00:23:36.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.174 "adrfam": "ipv4", 00:23:36.174 "trsvcid": "$NVMF_PORT", 00:23:36.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.174 "hdgst": ${hdgst:-false}, 00:23:36.174 "ddgst": ${ddgst:-false} 00:23:36.174 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 } 00:23:36.175 EOF 00:23:36.175 )") 00:23:36.175 07:03:57 -- nvmf/common.sh@542 -- # cat 00:23:36.175 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.175 07:03:57 -- nvmf/common.sh@544 -- # jq . 00:23:36.175 07:03:57 -- nvmf/common.sh@545 -- # IFS=, 00:23:36.175 07:03:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme1", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme2", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme3", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme4", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme5", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 
"adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme6", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme7", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme8", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme9", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 },{ 00:23:36.175 "params": { 00:23:36.175 "name": "Nvme10", 00:23:36.175 "trtype": "rdma", 00:23:36.175 "traddr": "192.168.100.8", 00:23:36.175 "adrfam": "ipv4", 00:23:36.175 "trsvcid": "4420", 00:23:36.175 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:36.175 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:36.175 "hdgst": false, 00:23:36.175 "ddgst": false 00:23:36.175 }, 00:23:36.175 "method": "bdev_nvme_attach_controller" 00:23:36.175 }' 00:23:36.175 [2024-12-15 07:03:57.521459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.175 [2024-12-15 07:03:57.558106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.115 Running I/O for 1 seconds... 
00:23:38.053
00:23:38.053 Latency(us)
00:23:38.053 [2024-12-15T06:03:59.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme1n1 : 1.10 733.70 45.86 0.00 0.00 86198.71 7392.46 120795.96
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme2n1 : 1.10 746.61 46.66 0.00 0.00 84058.33 7654.60 75497.47
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme3n1 : 1.11 745.94 46.62 0.00 0.00 83603.82 7864.32 73819.75
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme4n1 : 1.11 745.27 46.58 0.00 0.00 83195.29 8074.04 72561.46
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme5n1 : 1.11 744.61 46.54 0.00 0.00 82811.61 8283.75 71303.17
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme6n1 : 1.11 743.94 46.50 0.00 0.00 82415.31 8441.04 70883.74
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme7n1 : 1.11 743.27 46.45 0.00 0.00 81991.37 8650.75 72142.03
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme8n1 : 1.11 742.61 46.41 0.00 0.00 81567.61 8860.47 73819.75
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme9n1 : 1.11 741.83 46.36 0.00 0.00 81149.26 9070.18 75497.47
00:23:38.053 [2024-12-15T06:03:59.694Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:38.053 Verification LBA range: start 0x0 length 0x400
00:23:38.053 Nvme10n1 : 1.11 548.06 34.25 0.00 0.00 109023.20 7654.60 335544.32
00:23:38.053 [2024-12-15T06:03:59.694Z] ===================================================================================================================
00:23:38.053 [2024-12-15T06:03:59.694Z] Total : 7235.84 452.24 0.00 0.00 84972.44 7392.46 335544.32
00:23:38.313 07:03:59 -- target/shutdown.sh@93 -- # stoptarget
00:23:38.313 07:03:59 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:38.313 07:03:59 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:38.313 07:03:59 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:38.313 07:03:59 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:38.313 07:03:59 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:38.313 07:03:59 -- nvmf/common.sh@116 -- # sync
00:23:38.313 07:03:59 --
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:38.313 07:03:59 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:38.313 07:03:59 -- nvmf/common.sh@119 -- # set +e 00:23:38.313 07:03:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:38.313 07:03:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:38.313 rmmod nvme_rdma 00:23:38.313 rmmod nvme_fabrics 00:23:38.313 07:03:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:38.313 07:03:59 -- nvmf/common.sh@123 -- # set -e 00:23:38.313 07:03:59 -- nvmf/common.sh@124 -- # return 0 00:23:38.313 07:03:59 -- nvmf/common.sh@477 -- # '[' -n 1430993 ']' 00:23:38.313 07:03:59 -- nvmf/common.sh@478 -- # killprocess 1430993 00:23:38.313 07:03:59 -- common/autotest_common.sh@936 -- # '[' -z 1430993 ']' 00:23:38.313 07:03:59 -- common/autotest_common.sh@940 -- # kill -0 1430993 00:23:38.313 07:03:59 -- common/autotest_common.sh@941 -- # uname 00:23:38.313 07:03:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:38.313 07:03:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1430993 00:23:38.313 07:03:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:38.573 07:03:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:38.573 07:03:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1430993' 00:23:38.573 killing process with pid 1430993 00:23:38.573 07:03:59 -- common/autotest_common.sh@955 -- # kill 1430993 00:23:38.573 07:03:59 -- common/autotest_common.sh@960 -- # wait 1430993 00:23:38.833 07:04:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:38.833 07:04:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:38.833 00:23:38.833 real 0m13.461s 00:23:38.833 user 0m33.068s 00:23:38.833 sys 0m5.861s 00:23:38.833 07:04:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:38.833 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:38.833 ************************************ 00:23:38.833 END TEST nvmf_shutdown_tc1 00:23:38.833 ************************************ 00:23:38.833 07:04:00 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:38.833 07:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:38.833 07:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.833 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:38.833 ************************************ 00:23:38.833 START TEST nvmf_shutdown_tc2 00:23:38.833 ************************************ 00:23:38.833 07:04:00 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:23:38.833 07:04:00 -- target/shutdown.sh@98 -- # starttarget 00:23:38.833 07:04:00 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:38.833 07:04:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:38.833 07:04:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.833 07:04:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:38.833 07:04:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:38.833 07:04:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:38.833 07:04:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.833 07:04:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.833 07:04:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.833 07:04:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:38.833 07:04:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:38.833 07:04:00 -- nvmf/common.sh@284 -- # xtrace_disable 
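The killprocess sequence traced above (kill -0, uname, ps --no-headers -o comm=) is a guard pattern: confirm the pid is still alive, confirm it is not the sudo wrapper, then kill and reap it. A simplified sketch of the same idea (the real helper also branches on uname for non-Linux hosts, which is elided here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1             # refuse to kill the sudo wrapper
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it if it was our child
    }
    killprocess "$nvmfpid"
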
00:23:38.833 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:38.833 07:04:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:38.833 07:04:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:38.833 07:04:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:38.833 07:04:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:38.833 07:04:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:38.833 07:04:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:38.833 07:04:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:38.833 07:04:00 -- nvmf/common.sh@294 -- # net_devs=() 00:23:38.833 07:04:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:38.833 07:04:00 -- nvmf/common.sh@295 -- # e810=() 00:23:38.833 07:04:00 -- nvmf/common.sh@295 -- # local -ga e810 00:23:38.833 07:04:00 -- nvmf/common.sh@296 -- # x722=() 00:23:38.833 07:04:00 -- nvmf/common.sh@296 -- # local -ga x722 00:23:38.833 07:04:00 -- nvmf/common.sh@297 -- # mlx=() 00:23:38.833 07:04:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:38.833 07:04:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.833 07:04:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.093 07:04:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.093 07:04:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:39.093 07:04:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:39.093 07:04:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:39.093 07:04:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:39.093 07:04:00 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:39.093 07:04:00 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:39.093 07:04:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:39.093 07:04:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:39.093 07:04:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:39.093 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:39.093 07:04:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:39.093 07:04:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:39.093 07:04:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:39.093 07:04:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:39.093 07:04:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:39.093 07:04:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:39.093 07:04:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:39.093 07:04:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:39.093 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:39.094 07:04:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:39.094 07:04:00 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:39.094 07:04:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:39.094 07:04:00 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.094 07:04:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:39.094 07:04:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.094 07:04:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:39.094 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.094 07:04:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.094 07:04:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:39.094 07:04:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.094 07:04:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:39.094 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.094 07:04:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:39.094 07:04:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:39.094 07:04:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:39.094 07:04:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:39.094 07:04:00 -- nvmf/common.sh@57 -- # uname 00:23:39.094 07:04:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:39.094 07:04:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:39.094 07:04:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:39.094 07:04:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:39.094 07:04:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:39.094 07:04:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:39.094 07:04:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:39.094 07:04:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:39.094 07:04:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:39.094 07:04:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:39.094 07:04:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:39.094 07:04:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:39.094 07:04:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:39.094 07:04:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:39.094 07:04:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:39.094 07:04:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:39.094 07:04:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:39.094 
07:04:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@104 -- # continue 2 00:23:39.094 07:04:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@104 -- # continue 2 00:23:39.094 07:04:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:39.094 07:04:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:39.094 07:04:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:39.094 07:04:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:39.094 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:39.094 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:39.094 altname enp217s0f0np0 00:23:39.094 altname ens818f0np0 00:23:39.094 inet 192.168.100.8/24 scope global mlx_0_0 00:23:39.094 valid_lft forever preferred_lft forever 00:23:39.094 07:04:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:39.094 07:04:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:39.094 07:04:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:39.094 07:04:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:39.094 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:39.094 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:39.094 altname enp217s0f1np1 00:23:39.094 altname ens818f1np1 00:23:39.094 inet 192.168.100.9/24 scope global mlx_0_1 00:23:39.094 valid_lft forever preferred_lft forever 00:23:39.094 07:04:00 -- nvmf/common.sh@410 -- # return 0 00:23:39.094 07:04:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:39.094 07:04:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:39.094 07:04:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:39.094 07:04:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:39.094 07:04:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:39.094 07:04:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:39.094 07:04:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:39.094 07:04:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:39.094 07:04:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:39.094 07:04:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
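
The allocate_nic_ips pass above resolves each RDMA-capable port to a netdev name through sysfs and then reads its IPv4 address with the ip/awk/cut pipeline shown in the trace. A self-contained sketch (the PCI address and the printed addresses are the ones this rig reports):

# Sketch: map a PCI function to its netdev names, then read an IPv4 address.
pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep mlx_0_0

get_ip_address() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # 192.168.100.8
get_ip_address mlx_0_1   # 192.168.100.9
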
00:23:39.094 07:04:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@104 -- # continue 2 00:23:39.094 07:04:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:39.094 07:04:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:39.094 07:04:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@104 -- # continue 2 00:23:39.094 07:04:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:39.094 07:04:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:39.094 07:04:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:39.094 07:04:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:39.094 07:04:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:39.094 07:04:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:39.094 192.168.100.9' 00:23:39.094 07:04:00 -- nvmf/common.sh@445 -- # head -n 1 00:23:39.094 07:04:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:39.094 192.168.100.9' 00:23:39.094 07:04:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:39.094 07:04:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:39.094 192.168.100.9' 00:23:39.094 07:04:00 -- nvmf/common.sh@446 -- # tail -n +2 00:23:39.094 07:04:00 -- nvmf/common.sh@446 -- # head -n 1 00:23:39.094 07:04:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:39.094 07:04:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:39.094 07:04:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:39.094 07:04:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:39.094 07:04:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:39.094 07:04:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:39.094 07:04:00 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:39.094 07:04:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:39.094 07:04:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.094 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 07:04:00 -- nvmf/common.sh@469 -- # nvmfpid=1432509 00:23:39.094 07:04:00 -- nvmf/common.sh@470 -- # waitforlisten 1432509 00:23:39.094 07:04:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:39.094 07:04:00 -- common/autotest_common.sh@829 -- # '[' -z 1432509 ']' 00:23:39.094 07:04:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.094 07:04:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.094 07:04:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.094 07:04:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.094 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:39.354 [2024-12-15 07:04:00.759711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:39.354 [2024-12-15 07:04:00.759763] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.354 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.354 [2024-12-15 07:04:00.832130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.354 [2024-12-15 07:04:00.870242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:39.354 [2024-12-15 07:04:00.870352] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.354 [2024-12-15 07:04:00.870362] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.354 [2024-12-15 07:04:00.870375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.354 [2024-12-15 07:04:00.870480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.354 [2024-12-15 07:04:00.870562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.354 [2024-12-15 07:04:00.870672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.354 [2024-12-15 07:04:00.870674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:40.299 07:04:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.299 07:04:01 -- common/autotest_common.sh@862 -- # return 0 00:23:40.299 07:04:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:40.299 07:04:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.299 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:23:40.299 07:04:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.299 07:04:01 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:40.299 07:04:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.299 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:23:40.299 [2024-12-15 07:04:01.661374] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18e03c0/0x18e4890) succeed. 00:23:40.299 [2024-12-15 07:04:01.670700] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18e1960/0x1925f30) succeed. 
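
nvmfappstart forks nvmf_tgt and then blocks in waitforlisten until the process is both alive and answering RPCs on /var/tmp/spdk.sock; only after that does it create the RDMA transport. A hedged sketch of that gate (the poll interval and the framework_wait_init probe are assumptions; the transport flags are exactly those in the trace, assuming rpc_cmd forwards to scripts/rpc.py):

# Sketch: wait for an SPDK app's RPC socket, then create the rdma transport.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  for i in {1..100}; do
    kill -0 "$pid" || return 1          # target died during startup
    [ -S "$sock" ] && scripts/rpc.py -s "$sock" framework_wait_init && return 0
    sleep 0.1
  done
  return 1
}

waitforlisten "$nvmfpid" /var/tmp/spdk.sock
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
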
00:23:40.299 07:04:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.299 07:04:01 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:40.299 07:04:01 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:40.299 07:04:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.299 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:23:40.299 07:04:01 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.299 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.299 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.300 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.300 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.300 07:04:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.300 07:04:01 -- target/shutdown.sh@28 -- # cat 00:23:40.300 07:04:01 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:40.300 07:04:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.300 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 Malloc1 00:23:40.300 [2024-12-15 07:04:01.891470] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:40.300 Malloc2 00:23:40.558 Malloc3 00:23:40.558 Malloc4 00:23:40.558 Malloc5 00:23:40.558 Malloc6 00:23:40.558 Malloc7 00:23:40.558 Malloc8 00:23:40.819 Malloc9 00:23:40.819 Malloc10 00:23:40.819 07:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.819 07:04:02 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:40.819 07:04:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.819 07:04:02 -- common/autotest_common.sh@10 -- # set +x 00:23:40.819 07:04:02 -- target/shutdown.sh@102 -- # perfpid=1432835 00:23:40.819 07:04:02 -- target/shutdown.sh@103 -- # waitforlisten 1432835 /var/tmp/bdevperf.sock 00:23:40.819 07:04:02 -- common/autotest_common.sh@829 -- # '[' -z 1432835 ']' 00:23:40.819 07:04:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.819 07:04:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.819 07:04:02 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:40.819 07:04:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
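
The create_subsystems step just traced appends one block of RPC commands per subsystem to rpcs.txt (the ten @28 cat calls) and replays the whole file through a single rpc_cmd, which is what produces Malloc1 through Malloc10 and the listener on 192.168.100.8:4420. The file's contents are not echoed by xtrace; a sketch of what each block plausibly contains, using standard SPDK RPC names, with the malloc size and block size as assumptions:

# Sketch: emit the per-subsystem RPC block appended to rpcs.txt.
for i in {1..10}; do
cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
# rpc_cmd < rpcs.txt   # replayed in one shot, as the @35 rpc_cmd does
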
00:23:40.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.819 07:04:02 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:40.819 07:04:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.819 07:04:02 -- common/autotest_common.sh@10 -- # set +x 00:23:40.819 07:04:02 -- nvmf/common.sh@520 -- # config=() 00:23:40.819 07:04:02 -- nvmf/common.sh@520 -- # local subsystem config 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 [2024-12-15 07:04:02.383861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:40.819 [2024-12-15 07:04:02.383913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432835 ] 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.819 "params": { 00:23:40.819 "name": "Nvme$subsystem", 00:23:40.819 "trtype": "$TEST_TRANSPORT", 00:23:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.819 "adrfam": "ipv4", 00:23:40.819 "trsvcid": "$NVMF_PORT", 00:23:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.819 "hdgst": ${hdgst:-false}, 00:23:40.819 "ddgst": ${ddgst:-false} 00:23:40.819 }, 00:23:40.819 "method": "bdev_nvme_attach_controller" 00:23:40.819 } 00:23:40.819 EOF 00:23:40.819 )") 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.819 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.819 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.819 { 00:23:40.820 
"params": { 00:23:40.820 "name": "Nvme$subsystem", 00:23:40.820 "trtype": "$TEST_TRANSPORT", 00:23:40.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "$NVMF_PORT", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.820 "hdgst": ${hdgst:-false}, 00:23:40.820 "ddgst": ${ddgst:-false} 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 } 00:23:40.820 EOF 00:23:40.820 )") 00:23:40.820 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.820 07:04:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:40.820 07:04:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:40.820 { 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme$subsystem", 00:23:40.820 "trtype": "$TEST_TRANSPORT", 00:23:40.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "$NVMF_PORT", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.820 "hdgst": ${hdgst:-false}, 00:23:40.820 "ddgst": ${ddgst:-false} 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 } 00:23:40.820 EOF 00:23:40.820 )") 00:23:40.820 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.820 07:04:02 -- nvmf/common.sh@542 -- # cat 00:23:40.820 07:04:02 -- nvmf/common.sh@544 -- # jq . 00:23:40.820 07:04:02 -- nvmf/common.sh@545 -- # IFS=, 00:23:40.820 07:04:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme1", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme2", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme3", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme4", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme5", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme6", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme7", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme8", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme9", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 },{ 00:23:40.820 "params": { 00:23:40.820 "name": "Nvme10", 00:23:40.820 "trtype": "rdma", 00:23:40.820 "traddr": "192.168.100.8", 00:23:40.820 "adrfam": "ipv4", 00:23:40.820 "trsvcid": "4420", 00:23:40.820 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.820 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.820 "hdgst": false, 00:23:40.820 "ddgst": false 00:23:40.820 }, 00:23:40.820 "method": "bdev_nvme_attach_controller" 00:23:40.820 }' 00:23:40.820 [2024-12-15 07:04:02.457576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.080 [2024-12-15 07:04:02.493634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.018 Running I/O for 10 seconds... 
00:23:42.588 07:04:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.588 07:04:03 -- common/autotest_common.sh@862 -- # return 0 00:23:42.588 07:04:03 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:42.588 07:04:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.588 07:04:03 -- common/autotest_common.sh@10 -- # set +x 00:23:42.588 07:04:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.588 07:04:04 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:42.588 07:04:04 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:42.588 07:04:04 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:42.588 07:04:04 -- target/shutdown.sh@57 -- # local ret=1 00:23:42.588 07:04:04 -- target/shutdown.sh@58 -- # local i 00:23:42.588 07:04:04 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:42.588 07:04:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:42.588 07:04:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:42.588 07:04:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:42.588 07:04:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.588 07:04:04 -- common/autotest_common.sh@10 -- # set +x 00:23:42.588 07:04:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.588 07:04:04 -- target/shutdown.sh@60 -- # read_io_count=461 00:23:42.588 07:04:04 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:23:42.588 07:04:04 -- target/shutdown.sh@64 -- # ret=0 00:23:42.588 07:04:04 -- target/shutdown.sh@65 -- # break 00:23:42.588 07:04:04 -- target/shutdown.sh@69 -- # return 0 00:23:42.588 07:04:04 -- target/shutdown.sh@109 -- # killprocess 1432835 00:23:42.588 07:04:04 -- common/autotest_common.sh@936 -- # '[' -z 1432835 ']' 00:23:42.588 07:04:04 -- common/autotest_common.sh@940 -- # kill -0 1432835 00:23:42.588 07:04:04 -- common/autotest_common.sh@941 -- # uname 00:23:42.588 07:04:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:42.588 07:04:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1432835 00:23:42.588 07:04:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:42.588 07:04:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:42.588 07:04:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1432835' 00:23:42.588 killing process with pid 1432835 00:23:42.588 07:04:04 -- common/autotest_common.sh@955 -- # kill 1432835 00:23:42.588 07:04:04 -- common/autotest_common.sh@960 -- # wait 1432835
00:23:42.847 Received shutdown signal, test time was about 0.926994 seconds
00:23:42.847
00:23:42.847 Latency(us)
00:23:42.847 [2024-12-15T06:04:04.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:42.847 [2024-12-15T06:04:04.488Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.847 Verification LBA range: start 0x0 length 0x400
00:23:42.847 Nvme1n1 : 0.92 714.96 44.69 0.00 0.00 88317.83 7602.18 112407.35
00:23:42.847 [2024-12-15T06:04:04.488Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.847 Verification LBA range: start 0x0 length 0x400
00:23:42.847 Nvme2n1 : 0.92 741.34 46.33 0.00 0.00 84445.59 7916.75 104018.74
00:23:42.847 [2024-12-15T06:04:04.488Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.847 Verification LBA range: start 0x0 length 0x400
00:23:42.847 Nvme3n1 : 0.92 752.53 47.03 0.00 0.00 82536.95 8074.04 74658.61
00:23:42.847 [2024-12-15T06:04:04.488Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.847 Verification LBA range: start 0x0 length 0x400
00:23:42.847 Nvme4n1 : 0.92 751.75 46.98 0.00 0.00 82062.74 8178.89 72142.03
00:23:42.847 [2024-12-15T06:04:04.488Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.847 Verification LBA range: start 0x0 length 0x400
00:23:42.847 Nvme5n1 : 0.92 750.97 46.94 0.00 0.00 81585.05 8336.18 70464.31
00:23:42.847 [2024-12-15T06:04:04.488Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.847 Verification LBA range: start 0x0 length 0x400
00:23:42.847 Nvme6n1 : 0.92 750.20 46.89 0.00 0.00 81082.50 8441.04 69625.45
00:23:42.847 [2024-12-15T06:04:04.488Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.848 Verification LBA range: start 0x0 length 0x400
00:23:42.848 Nvme7n1 : 0.92 749.42 46.84 0.00 0.00 80566.62 8598.32 70883.74
00:23:42.848 [2024-12-15T06:04:04.489Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.848 Verification LBA range: start 0x0 length 0x400
00:23:42.848 Nvme8n1 : 0.92 748.65 46.79 0.00 0.00 80072.42 8703.18 72561.46
00:23:42.848 [2024-12-15T06:04:04.489Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.848 Verification LBA range: start 0x0 length 0x400
00:23:42.848 Nvme9n1 : 0.93 659.30 41.21 0.00 0.00 90221.11 8808.04 159383.55
00:23:42.848 [2024-12-15T06:04:04.489Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.848 Verification LBA range: start 0x0 length 0x400
00:23:42.848 Nvme10n1 : 0.93 521.47 32.59 0.00 0.00 113248.32 7811.89 317089.38
00:23:42.848 [2024-12-15T06:04:04.489Z] ===================================================================================================================
00:23:42.848 [2024-12-15T06:04:04.489Z] Total : 7140.61 446.29 0.00 0.00 85507.28 7602.18 317089.38
00:23:43.107 07:04:04 -- target/shutdown.sh@112 -- # sleep 1 00:23:44.043 07:04:05 -- target/shutdown.sh@113 -- # kill -0 1432509 00:23:44.043 07:04:05 -- target/shutdown.sh@115 -- # stoptarget 00:23:44.043 07:04:05 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:44.043 07:04:05 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:44.043 07:04:05 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.043 07:04:05 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:44.043 07:04:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:44.043 07:04:05 -- nvmf/common.sh@116 -- # sync 00:23:44.043 07:04:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:44.043 07:04:05 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:44.043 07:04:05 -- nvmf/common.sh@119 -- # set +e 00:23:44.043 07:04:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:44.043 07:04:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:44.043 rmmod nvme_rdma 00:23:44.043 rmmod nvme_fabrics 00:23:44.043 07:04:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:44.043 07:04:05 -- nvmf/common.sh@123 -- # set -e 00:23:44.043 07:04:05 -- nvmf/common.sh@124 -- # return 0 00:23:44.043 07:04:05 -- nvmf/common.sh@477 -- # '[' -n 1432509 ']' 00:23:44.043 07:04:05 -- nvmf/common.sh@478 -- # killprocess 1432509 00:23:44.043 07:04:05 -- 
common/autotest_common.sh@936 -- # '[' -z 1432509 ']' 00:23:44.043 07:04:05 -- common/autotest_common.sh@940 -- # kill -0 1432509 00:23:44.043 07:04:05 -- common/autotest_common.sh@941 -- # uname 00:23:44.043 07:04:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:44.043 07:04:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1432509 00:23:44.302 07:04:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:44.302 07:04:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:44.302 07:04:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1432509' 00:23:44.302 killing process with pid 1432509 00:23:44.302 07:04:05 -- common/autotest_common.sh@955 -- # kill 1432509 00:23:44.302 07:04:05 -- common/autotest_common.sh@960 -- # wait 1432509 00:23:44.562 07:04:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:44.562 07:04:06 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:44.562 00:23:44.562 real 0m5.693s 00:23:44.562 user 0m23.143s 00:23:44.562 sys 0m1.212s 00:23:44.562 07:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:44.562 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:44.562 ************************************ 00:23:44.562 END TEST nvmf_shutdown_tc2 00:23:44.562 ************************************ 00:23:44.562 07:04:06 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:44.562 07:04:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:44.562 07:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:44.562 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:44.822 ************************************ 00:23:44.822 START TEST nvmf_shutdown_tc3 00:23:44.822 ************************************ 00:23:44.822 07:04:06 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:23:44.822 07:04:06 -- target/shutdown.sh@120 -- # starttarget 00:23:44.822 07:04:06 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:44.822 07:04:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:44.822 07:04:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.822 07:04:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:44.822 07:04:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:44.822 07:04:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:44.822 07:04:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.822 07:04:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.822 07:04:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.822 07:04:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:44.822 07:04:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:44.822 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:44.822 07:04:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:44.822 07:04:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:44.822 07:04:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:44.822 07:04:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:44.822 07:04:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:44.822 07:04:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:44.822 07:04:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:44.822 07:04:06 -- nvmf/common.sh@294 -- # net_devs=() 00:23:44.822 07:04:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:44.822 07:04:06 -- 
nvmf/common.sh@295 -- # e810=() 00:23:44.822 07:04:06 -- nvmf/common.sh@295 -- # local -ga e810 00:23:44.822 07:04:06 -- nvmf/common.sh@296 -- # x722=() 00:23:44.822 07:04:06 -- nvmf/common.sh@296 -- # local -ga x722 00:23:44.822 07:04:06 -- nvmf/common.sh@297 -- # mlx=() 00:23:44.822 07:04:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:44.822 07:04:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.822 07:04:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:44.822 07:04:06 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:44.822 07:04:06 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:44.822 07:04:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:44.822 07:04:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:44.822 07:04:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:44.822 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:44.822 07:04:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:44.822 07:04:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:44.822 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:44.822 07:04:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:44.822 07:04:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:44.822 07:04:06 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.822 07:04:06 -- nvmf/common.sh@383 -- # 
(( 1 == 0 )) 00:23:44.822 07:04:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.822 07:04:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:44.822 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:44.822 07:04:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.822 07:04:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.822 07:04:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:44.822 07:04:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.822 07:04:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:44.822 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:44.822 07:04:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.822 07:04:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:44.822 07:04:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:44.822 07:04:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:44.822 07:04:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:44.822 07:04:06 -- nvmf/common.sh@57 -- # uname 00:23:44.822 07:04:06 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:44.822 07:04:06 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:44.822 07:04:06 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:44.822 07:04:06 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:44.822 07:04:06 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:44.822 07:04:06 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:44.822 07:04:06 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:44.822 07:04:06 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:44.822 07:04:06 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:44.822 07:04:06 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:44.822 07:04:06 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:44.822 07:04:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:44.822 07:04:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:44.822 07:04:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:44.822 07:04:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:44.822 07:04:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:44.822 07:04:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:44.822 07:04:06 -- nvmf/common.sh@104 -- # continue 2 00:23:44.822 07:04:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:44.822 07:04:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:44.822 07:04:06 -- nvmf/common.sh@104 -- # continue 2 00:23:44.822 07:04:06 -- nvmf/common.sh@72 -- # for nic_name 
in $(get_rdma_if_list) 00:23:44.822 07:04:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:44.822 07:04:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:44.822 07:04:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:44.822 07:04:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:44.822 07:04:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:44.822 07:04:06 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:44.822 07:04:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:44.822 07:04:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:44.822 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:44.822 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:44.822 altname enp217s0f0np0 00:23:44.823 altname ens818f0np0 00:23:44.823 inet 192.168.100.8/24 scope global mlx_0_0 00:23:44.823 valid_lft forever preferred_lft forever 00:23:44.823 07:04:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:44.823 07:04:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:44.823 07:04:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:44.823 07:04:06 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:44.823 07:04:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:44.823 07:04:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:44.823 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:44.823 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:44.823 altname enp217s0f1np1 00:23:44.823 altname ens818f1np1 00:23:44.823 inet 192.168.100.9/24 scope global mlx_0_1 00:23:44.823 valid_lft forever preferred_lft forever 00:23:44.823 07:04:06 -- nvmf/common.sh@410 -- # return 0 00:23:44.823 07:04:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:44.823 07:04:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:44.823 07:04:06 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:44.823 07:04:06 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:44.823 07:04:06 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:44.823 07:04:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:44.823 07:04:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:44.823 07:04:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:44.823 07:04:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:44.823 07:04:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:44.823 07:04:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:44.823 07:04:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:44.823 07:04:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:44.823 07:04:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:44.823 07:04:06 -- nvmf/common.sh@104 -- # continue 2 00:23:44.823 07:04:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:44.823 07:04:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:44.823 07:04:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:44.823 07:04:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:44.823 07:04:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:44.823 07:04:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:44.823 07:04:06 -- 
nvmf/common.sh@104 -- # continue 2 00:23:44.823 07:04:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:44.823 07:04:06 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:44.823 07:04:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:44.823 07:04:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:44.823 07:04:06 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:44.823 07:04:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:44.823 07:04:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:44.823 07:04:06 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:44.823 192.168.100.9' 00:23:44.823 07:04:06 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:44.823 192.168.100.9' 00:23:44.823 07:04:06 -- nvmf/common.sh@445 -- # head -n 1 00:23:44.823 07:04:06 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:44.823 07:04:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:44.823 192.168.100.9' 00:23:44.823 07:04:06 -- nvmf/common.sh@446 -- # tail -n +2 00:23:44.823 07:04:06 -- nvmf/common.sh@446 -- # head -n 1 00:23:44.823 07:04:06 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:44.823 07:04:06 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:44.823 07:04:06 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:44.823 07:04:06 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:44.823 07:04:06 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:44.823 07:04:06 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:44.823 07:04:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:44.823 07:04:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:44.823 07:04:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.823 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:44.823 07:04:06 -- nvmf/common.sh@469 -- # nvmfpid=1433638 00:23:44.823 07:04:06 -- nvmf/common.sh@470 -- # waitforlisten 1433638 00:23:44.823 07:04:06 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:44.823 07:04:06 -- common/autotest_common.sh@829 -- # '[' -z 1433638 ']' 00:23:44.823 07:04:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.823 07:04:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.823 07:04:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.823 07:04:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.823 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:45.082 [2024-12-15 07:04:06.493494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
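
As in tc2, the per-port addresses are flattened into RDMA_IP_LIST and the first and second entries are peeled off with head and tail. A sketch of that selection, assuming the list is newline-separated as the quoting in the trace suggests:

# Sketch: derive the target IPs from the newline-separated RDMA_IP_LIST.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
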
00:23:45.082 [2024-12-15 07:04:06.493550] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.082 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.082 [2024-12-15 07:04:06.563857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.082 [2024-12-15 07:04:06.601719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:45.082 [2024-12-15 07:04:06.601827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.082 [2024-12-15 07:04:06.601837] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.082 [2024-12-15 07:04:06.601846] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.082 [2024-12-15 07:04:06.601952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.082 [2024-12-15 07:04:06.602038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.082 [2024-12-15 07:04:06.602149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.082 [2024-12-15 07:04:06.602150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:46.062 07:04:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.062 07:04:07 -- common/autotest_common.sh@862 -- # return 0 00:23:46.062 07:04:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:46.062 07:04:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.062 07:04:07 -- common/autotest_common.sh@10 -- # set +x 00:23:46.062 07:04:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.062 07:04:07 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:46.062 07:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.062 07:04:07 -- common/autotest_common.sh@10 -- # set +x 00:23:46.062 [2024-12-15 07:04:07.386631] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24043c0/0x2408890) succeed. 00:23:46.062 [2024-12-15 07:04:07.395802] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2405960/0x2449f30) succeed. 
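
In the block above, nvmfappstart launched build/bin/nvmf_tgt and waitforlisten (max_retries=100) blocked until the app was accepting commands on /var/tmp/spdk.sock; only once it returned 0 did the harness create the rdma transport, producing the two create_ib_device notices. A hedged sketch of what such a wait loop looks like; checking only for the socket file is a simplification, and the real autotest_common.sh helper may also probe the RPC server before returning.

# Hedged sketch of a waitforlisten-style helper: poll until the app's
# RPC socket exists (the real helper in autotest_common.sh may do more,
# e.g. issue a probe RPC; the socket check here is an assumption).
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1 # app died during startup
        [ -S "$rpc_addr" ] && return 0         # socket is up: done waiting
        sleep 0.5
    done
    return 1
}
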
00:23:46.062 07:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.062 07:04:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:46.062 07:04:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:46.062 07:04:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.062 07:04:07 -- common/autotest_common.sh@10 -- # set +x 00:23:46.062 07:04:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.062 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.062 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.062 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.062 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.062 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.062 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.062 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.062 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.063 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.063 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.063 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.063 07:04:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:46.063 07:04:07 -- target/shutdown.sh@28 -- # cat 00:23:46.063 07:04:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:46.063 07:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.063 07:04:07 -- common/autotest_common.sh@10 -- # set +x 00:23:46.063 Malloc1 00:23:46.063 [2024-12-15 07:04:07.616574] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:46.063 Malloc2 00:23:46.063 Malloc3 00:23:46.368 Malloc4 00:23:46.368 Malloc5 00:23:46.368 Malloc6 00:23:46.368 Malloc7 00:23:46.368 Malloc8 00:23:46.368 Malloc9 00:23:46.368 Malloc10 00:23:46.629 07:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.629 07:04:08 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:46.629 07:04:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.629 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:23:46.629 07:04:08 -- target/shutdown.sh@124 -- # perfpid=1433971 00:23:46.629 07:04:08 -- target/shutdown.sh@125 -- # waitforlisten 1433971 /var/tmp/bdevperf.sock 00:23:46.629 07:04:08 -- common/autotest_common.sh@829 -- # '[' -z 1433971 ']' 00:23:46.629 07:04:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.629 07:04:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.629 07:04:08 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:46.629 07:04:08 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:46.629 07:04:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.629 07:04:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.629 07:04:08 -- nvmf/common.sh@520 -- # config=() 00:23:46.629 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:23:46.629 07:04:08 -- nvmf/common.sh@520 -- # local subsystem config 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 [2024-12-15 07:04:08.111637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:46.629 [2024-12-15 07:04:08.111690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433971 ] 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 
00:23:46.629 "name": "Nvme$subsystem", 00:23:46.629 "trtype": "$TEST_TRANSPORT", 00:23:46.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.629 "adrfam": "ipv4", 00:23:46.629 "trsvcid": "$NVMF_PORT", 00:23:46.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.629 "hdgst": ${hdgst:-false}, 00:23:46.629 "ddgst": ${ddgst:-false} 00:23:46.629 }, 00:23:46.629 "method": "bdev_nvme_attach_controller" 00:23:46.629 } 00:23:46.629 EOF 00:23:46.629 )") 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.629 07:04:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:46.629 07:04:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:46.629 { 00:23:46.629 "params": { 00:23:46.629 "name": "Nvme$subsystem", 00:23:46.630 "trtype": "$TEST_TRANSPORT", 00:23:46.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "$NVMF_PORT", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.630 "hdgst": ${hdgst:-false}, 00:23:46.630 "ddgst": ${ddgst:-false} 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 } 00:23:46.630 EOF 00:23:46.630 )") 00:23:46.630 07:04:08 -- nvmf/common.sh@542 -- # cat 00:23:46.630 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.630 07:04:08 -- nvmf/common.sh@544 -- # jq . 00:23:46.630 07:04:08 -- nvmf/common.sh@545 -- # IFS=, 00:23:46.630 07:04:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme1", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme2", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme3", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme4", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme5", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme6", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme7", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme8", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme9", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 },{ 00:23:46.630 "params": { 00:23:46.630 "name": "Nvme10", 00:23:46.630 "trtype": "rdma", 00:23:46.630 "traddr": "192.168.100.8", 00:23:46.630 "adrfam": "ipv4", 00:23:46.630 "trsvcid": "4420", 00:23:46.630 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:46.630 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:46.630 "hdgst": false, 00:23:46.630 "ddgst": false 00:23:46.630 }, 00:23:46.630 "method": "bdev_nvme_attach_controller" 00:23:46.630 }' 00:23:46.630 [2024-12-15 07:04:08.186878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.630 [2024-12-15 07:04:08.223097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.568 Running I/O for 10 seconds... 
00:23:48.136 07:04:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.136 07:04:09 -- common/autotest_common.sh@862 -- # return 0 00:23:48.136 07:04:09 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:48.136 07:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.136 07:04:09 -- common/autotest_common.sh@10 -- # set +x 00:23:48.396 07:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.396 07:04:09 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.396 07:04:09 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:48.396 07:04:09 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:48.396 07:04:09 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:48.396 07:04:09 -- target/shutdown.sh@57 -- # local ret=1 00:23:48.396 07:04:09 -- target/shutdown.sh@58 -- # local i 00:23:48.396 07:04:09 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:48.396 07:04:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:48.396 07:04:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:48.396 07:04:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:48.396 07:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.396 07:04:09 -- common/autotest_common.sh@10 -- # set +x 00:23:48.396 07:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.396 07:04:09 -- target/shutdown.sh@60 -- # read_io_count=504 00:23:48.396 07:04:09 -- target/shutdown.sh@63 -- # '[' 504 -ge 100 ']' 00:23:48.396 07:04:09 -- target/shutdown.sh@64 -- # ret=0 00:23:48.396 07:04:09 -- target/shutdown.sh@65 -- # break 00:23:48.396 07:04:09 -- target/shutdown.sh@69 -- # return 0 00:23:48.396 07:04:09 -- target/shutdown.sh@134 -- # killprocess 1433638 00:23:48.396 07:04:09 -- common/autotest_common.sh@936 -- # '[' -z 1433638 ']' 00:23:48.396 07:04:09 -- common/autotest_common.sh@940 -- # kill -0 1433638 00:23:48.396 07:04:09 -- common/autotest_common.sh@941 -- # uname 00:23:48.396 07:04:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:48.396 07:04:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1433638 00:23:48.396 07:04:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:48.396 07:04:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:48.396 07:04:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1433638' 00:23:48.396 killing process with pid 1433638 00:23:48.397 07:04:09 -- common/autotest_common.sh@955 -- # kill 1433638 00:23:48.397 07:04:09 -- common/autotest_common.sh@960 -- # wait 1433638 00:23:48.965 07:04:10 -- target/shutdown.sh@135 -- # nvmfpid= 00:23:48.965 07:04:10 -- target/shutdown.sh@138 -- # sleep 1 00:23:49.537 [2024-12-15 07:04:11.027004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.027047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:146a p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.027061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.027071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:146a p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.027086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.027095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:146a p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.027104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.027113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:146a p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.029839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.537 [2024-12-15 07:04:11.029857] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:49.537 [2024-12-15 07:04:11.029883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.029892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:eedc p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.029902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.029911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:eedc p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.029920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.029929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:eedc p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.029938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.029947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:eedc p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.032338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.537 [2024-12-15 07:04:11.032381] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:23:49.537 [2024-12-15 07:04:11.032433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.032466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:ded6 p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.032497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.032527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:ded6 p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.032560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.032589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:ded6 p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.032621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.032651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:ded6 p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.035241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.537 [2024-12-15 07:04:11.035283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:49.537 [2024-12-15 07:04:11.035340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.035373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:80b4 p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.035405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.035435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:80b4 p:0 m:0 dnr:0 00:23:49.537 [2024-12-15 07:04:11.035467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.537 [2024-12-15 07:04:11.035497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:80b4 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.035529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.035559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:80b4 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.038010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.538 [2024-12-15 07:04:11.038052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:49.538 [2024-12-15 07:04:11.038100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.038134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:8b10 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.038166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.038196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:8b10 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.038227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.038258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:8b10 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.038290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.038319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:8b10 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.040698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.538 [2024-12-15 07:04:11.040739] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:49.538 [2024-12-15 07:04:11.040786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.040819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:c588 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.040852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.040881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:c588 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.040913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.040944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:c588 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.040997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.041029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:c588 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.043513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.538 [2024-12-15 07:04:11.043555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:23:49.538 [2024-12-15 07:04:11.043601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.043633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e084 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.043666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.043695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e084 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.043727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.043757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e084 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.043789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.043818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e084 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.046163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.538 [2024-12-15 07:04:11.046205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:49.538 [2024-12-15 07:04:11.046255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.046287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e344 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.046320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.046350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e344 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.046382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.046412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e344 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.046443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.046473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:e344 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.048916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.538 [2024-12-15 07:04:11.048934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:23:49.538 [2024-12-15 07:04:11.048955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.048968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:b8ac p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.048991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.049004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:b8ac p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.049017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.049030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:b8ac p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.049043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.049055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:b8ac p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.051058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.538 [2024-12-15 07:04:11.051098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:49.538 [2024-12-15 07:04:11.051144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.051177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:f202 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.051210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.051240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:f202 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.051272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.051301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:f202 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.051334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.538 [2024-12-15 07:04:11.051363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27732 cdw0:0 sqhd:f202 p:0 m:0 dnr:0 00:23:49.538 [2024-12-15 07:04:11.053995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:49.538 [2024-12-15 07:04:11.054039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:23:49.538 [2024-12-15 07:04:11.056361] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.056404] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.538 [2024-12-15 07:04:11.058871] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.058907] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.538 [2024-12-15 07:04:11.060933] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.060953] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.538 [2024-12-15 07:04:11.062847] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.062866] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.538 [2024-12-15 07:04:11.065121] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.065143] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.538 [2024-12-15 07:04:11.067173] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.067192] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.538 [2024-12-15 07:04:11.069161] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.069180] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.538 [2024-12-15 07:04:11.071599] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller. 00:23:49.538 [2024-12-15 07:04:11.071618] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
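
The storm above is the expected host-side shutdown sequence: killprocess (traced earlier) sent SIGTERM to nvmf_tgt pid 1433638, so every admin qpair on bdevperf's side hit CQ transport error -6 (No such device or address), each of the ten controllers (cnode1 through cnode10) was marked failed, in-flight commands were aborted with SQ DELETION, and the disconnected qpairs were freed while failover was already in progress. For reference, a sketch of the killprocess pattern as it appears in the trace (liveness check, a guard against killing the sudo wrapper, then kill and wait); returning early on the sudo case is a simplification of the real helper.

# Sketch of the killprocess pattern from the trace: verify the pid is
# alive, refuse to kill a bare "sudo" wrapper, then TERM and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1 # is the process alive at all?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Simplification: the in-tree helper handles the sudo case
        # differently rather than bailing out.
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
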
00:23:49.538 [2024-12-15 07:04:11.071729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183600 00:23:49.538 [2024-12-15 07:04:11.071745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.071782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.071813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.071844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.071874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.071905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.071935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183500 00:23:49.539 [2024-12-15 07:04:11.071965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.071993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.072007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 
00:23:49.539 [2024-12-15 07:04:11.072025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.072037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.072068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.072098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.072128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.072158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.072188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183f00 00:23:49.539 [2024-12-15 07:04:11.072218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.072251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183600 00:23:49.539 [2024-12-15 07:04:11.072282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 
00:23:49.539 [2024-12-15 07:04:11.072300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.072314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.072346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.072364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183800 00:23:49.539 [2024-12-15 07:04:11.072377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:3920 p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.074275] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:23:49.539 [2024-12-15 07:04:11.074293] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:49.539 [2024-12-15 07:04:11.074416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183200 00:23:49.539 [2024-12-15 07:04:11.074432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:96ce p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.074452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183f00 00:23:49.539 [2024-12-15 07:04:11.074466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:96ce p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.074484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183200 00:23:49.539 [2024-12-15 07:04:11.074497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:96ce p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.074514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183f00 00:23:49.539 [2024-12-15 07:04:11.074529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:96ce p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.074546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184100 00:23:49.539 [2024-12-15 07:04:11.074559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:96ce p:0 m:0 dnr:0 00:23:49.539 [2024-12-15 07:04:11.074577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184100
00:23:49.539 [2024-12-15 07:04:11.074590 - 07:04:11.076458] nvme_qpair.c: (~60 near-identical NOTICE pairs elided: each records a queued READ or WRITE on sqid:1, lba 61824-72192, len:128, SGL KEYED DATA BLOCK, completed as ABORTED - SQ DELETION (00/08) qid:1 cid:27732 cdw0:324b2000 sqhd:96ce p:0 m:0 dnr:0)
00:23:49.541 [2024-12-15 07:04:11.095814] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller.
00:23:49.541 [2024-12-15 07:04:11.095837 - 07:04:11.096009] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (11 identical notices)
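The abort flood condensed above is easy to tally straight from the raw console capture; a minimal sketch, assuming the run was saved to a file (the name bdevperf.log is hypothetical):

  # total SQ-DELETION completions, then the READ/WRITE split of the aborted commands
  grep -c 'ABORTED - SQ DELETION' bdevperf.log
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' bdevperf.log | sort | uniq -c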
00:23:49.541 [2024-12-15 07:04:11.097561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:49.541 [2024-12-15 07:04:11.097968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:49.541 task offset: 88576 on job bdev=Nvme1n1 fails
00:23:49.541
00:23:49.541 Latency(us)
00:23:49.541 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error, runtimes as listed)
00:23:49.541 Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average    min       max
00:23:49.541 Nvme1n1            : 2.02        331.78   20.74   31.69   0.00  174541.19  39845.89  1040187.39
00:23:49.541 Nvme2n1            : 2.02        331.53   20.72   31.67   0.00  173870.19  40684.75  1040187.39
00:23:49.541 Nvme3n1            : 2.02        334.84   20.93   31.65   0.00  171667.32  40265.32  1040187.39
00:23:49.541 Nvme4n1            : 2.02        349.02   21.81   31.64   0.00  164671.37  38377.88  1033476.51
00:23:49.541 Nvme5n1            : 2.02        354.80   22.17   31.63   0.00  161526.43  37119.59  1033476.51
00:23:49.541 Nvme6n1            : 2.02        354.63   22.16   31.61   0.00  161017.83  37958.45  1033476.51
00:23:49.541 Nvme7n1            : 2.03        354.47   22.15   31.60   0.00  160495.82  38587.60  1033476.51
00:23:49.541 Nvme8n1            : 2.03        354.30   22.14   31.58   0.00  159998.75  37958.45  1026765.62
00:23:49.541 Nvme9n1            : 2.01        248.98   15.56   31.81   0.00  219531.54  32925.29  1060320.05
00:23:49.542 Nvme10n1           : 1.99        251.53   15.72   32.13   0.00  218093.42  48234.50  1060320.05
00:23:49.542 ===================================================================================================================
00:23:49.542 Total              :             3265.88  204.12  317.01  0.00  174084.68  32925.29  1060320.05
00:23:49.542 [2024-12-15 07:04:11.133237] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:49.542 [2024-12-15 07:04:11.139311 - 07:04:11.142475] nvme_rdma.c: ten reconnect attempts failed with the same error triplet (617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8); 1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74; 2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair) for rqpairs 0x2000192c6100, 0x2000192dc7c0, 0x2000192ba580, 0x2000192e53c0, 0x2000192ed0c0, 0x2000192bd540, 0x2000192a89c0, 0x20001928f500, 0x20001929c180, 0x20001928e180
00:23:50.110 07:04:11 -- target/shutdown.sh@141 -- # kill -9 1433971
00:23:50.110 07:04:11 -- target/shutdown.sh@143
-- # stoptarget 00:23:50.110 07:04:11 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:50.110 07:04:11 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:50.110 07:04:11 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:50.110 07:04:11 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:50.110 07:04:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:50.110 07:04:11 -- nvmf/common.sh@116 -- # sync 00:23:50.110 07:04:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:50.110 07:04:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:50.110 07:04:11 -- nvmf/common.sh@119 -- # set +e 00:23:50.110 07:04:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:50.110 07:04:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:50.110 rmmod nvme_rdma 00:23:50.110 rmmod nvme_fabrics 00:23:50.110 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 1433971 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:23:50.110 07:04:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:50.110 07:04:11 -- nvmf/common.sh@123 -- # set -e 00:23:50.110 07:04:11 -- nvmf/common.sh@124 -- # return 0 00:23:50.110 07:04:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:23:50.110 07:04:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:50.110 07:04:11 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:50.110 00:23:50.110 real 0m5.335s 00:23:50.110 user 0m18.500s 00:23:50.110 sys 0m1.338s 00:23:50.110 07:04:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:50.110 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.110 ************************************ 00:23:50.110 END TEST nvmf_shutdown_tc3 00:23:50.110 ************************************ 00:23:50.110 07:04:11 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:23:50.110 00:23:50.110 real 0m24.868s 00:23:50.110 user 1m14.893s 00:23:50.110 sys 0m8.654s 00:23:50.110 07:04:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:50.110 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.110 ************************************ 00:23:50.110 END TEST nvmf_shutdown 00:23:50.110 ************************************ 00:23:50.110 07:04:11 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:50.110 07:04:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:50.110 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.110 07:04:11 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:50.111 07:04:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.111 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.111 07:04:11 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:50.111 07:04:11 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:50.111 07:04:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:50.111 07:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:50.111 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.111 ************************************ 00:23:50.111 START TEST nvmf_multicontroller 00:23:50.111 ************************************ 00:23:50.111 07:04:11 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:50.370 * Looking for test storage... 00:23:50.370 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:50.370 07:04:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:50.370 07:04:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:50.370 07:04:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:50.370 07:04:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:50.370 07:04:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:50.370 07:04:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:50.370 07:04:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:50.370 07:04:11 -- scripts/common.sh@335 -- # IFS=.-: 00:23:50.370 07:04:11 -- scripts/common.sh@335 -- # read -ra ver1 00:23:50.370 07:04:11 -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.371 07:04:11 -- scripts/common.sh@336 -- # read -ra ver2 00:23:50.371 07:04:11 -- scripts/common.sh@337 -- # local 'op=<' 00:23:50.371 07:04:11 -- scripts/common.sh@339 -- # ver1_l=2 00:23:50.371 07:04:11 -- scripts/common.sh@340 -- # ver2_l=1 00:23:50.371 07:04:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:50.371 07:04:11 -- scripts/common.sh@343 -- # case "$op" in 00:23:50.371 07:04:11 -- scripts/common.sh@344 -- # : 1 00:23:50.371 07:04:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:50.371 07:04:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:50.371 07:04:11 -- scripts/common.sh@364 -- # decimal 1 00:23:50.371 07:04:11 -- scripts/common.sh@352 -- # local d=1 00:23:50.371 07:04:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.371 07:04:11 -- scripts/common.sh@354 -- # echo 1 00:23:50.371 07:04:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:50.371 07:04:11 -- scripts/common.sh@365 -- # decimal 2 00:23:50.371 07:04:11 -- scripts/common.sh@352 -- # local d=2 00:23:50.371 07:04:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.371 07:04:11 -- scripts/common.sh@354 -- # echo 2 00:23:50.371 07:04:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:50.371 07:04:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:50.371 07:04:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:50.371 07:04:11 -- scripts/common.sh@367 -- # return 0 00:23:50.371 07:04:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.371 07:04:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.371 --rc genhtml_branch_coverage=1 00:23:50.371 --rc genhtml_function_coverage=1 00:23:50.371 --rc genhtml_legend=1 00:23:50.371 --rc geninfo_all_blocks=1 00:23:50.371 --rc geninfo_unexecuted_blocks=1 00:23:50.371 00:23:50.371 ' 00:23:50.371 07:04:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.371 --rc genhtml_branch_coverage=1 00:23:50.371 --rc genhtml_function_coverage=1 00:23:50.371 --rc genhtml_legend=1 00:23:50.371 --rc geninfo_all_blocks=1 00:23:50.371 --rc geninfo_unexecuted_blocks=1 00:23:50.371 00:23:50.371 ' 00:23:50.371 07:04:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.371 --rc genhtml_branch_coverage=1 00:23:50.371 --rc genhtml_function_coverage=1 
00:23:50.371 --rc genhtml_legend=1 00:23:50.371 --rc geninfo_all_blocks=1 00:23:50.371 --rc geninfo_unexecuted_blocks=1 00:23:50.371 00:23:50.371 ' 00:23:50.371 07:04:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:50.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.371 --rc genhtml_branch_coverage=1 00:23:50.371 --rc genhtml_function_coverage=1 00:23:50.371 --rc genhtml_legend=1 00:23:50.371 --rc geninfo_all_blocks=1 00:23:50.371 --rc geninfo_unexecuted_blocks=1 00:23:50.371 00:23:50.371 ' 00:23:50.371 07:04:11 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.371 07:04:11 -- nvmf/common.sh@7 -- # uname -s 00:23:50.371 07:04:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.371 07:04:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.371 07:04:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.371 07:04:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.371 07:04:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.371 07:04:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.371 07:04:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.371 07:04:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.371 07:04:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.371 07:04:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.371 07:04:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:50.371 07:04:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:50.371 07:04:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.371 07:04:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.371 07:04:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.371 07:04:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:50.371 07:04:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.371 07:04:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.371 07:04:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.371 07:04:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.371 07:04:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.371 07:04:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.371 07:04:11 -- paths/export.sh@5 -- # export PATH 00:23:50.371 07:04:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.371 07:04:11 -- nvmf/common.sh@46 -- # : 0 00:23:50.371 07:04:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:50.371 07:04:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:50.371 07:04:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:50.371 07:04:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.371 07:04:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.371 07:04:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:50.371 07:04:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:50.371 07:04:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:50.371 07:04:11 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:50.371 07:04:11 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:50.371 07:04:11 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:50.371 07:04:11 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:50.371 07:04:11 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.371 07:04:11 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:23:50.371 07:04:11 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:50.371 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
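The skip traced at host/multicontroller.sh@18-@19 just above, together with the @20 exit that follows below, amounts to a plain transport guard at the top of the script; a minimal reconstruction from those xtrace lines, assuming the transport is carried in a variable such as TEST_TRANSPORT (the trace only shows the value already expanded to rdma):

  # guard sketch: multicontroller needs host and target on one IP, which the rdma stack cannot provide
  if [ "$TEST_TRANSPORT" == rdma ]; then
          echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
          exit 0
  fi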
00:23:50.371 07:04:11 -- host/multicontroller.sh@20 -- # exit 0 00:23:50.371 00:23:50.371 real 0m0.218s 00:23:50.371 user 0m0.128s 00:23:50.371 sys 0m0.105s 00:23:50.371 07:04:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:50.371 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.371 ************************************ 00:23:50.371 END TEST nvmf_multicontroller 00:23:50.371 ************************************ 00:23:50.371 07:04:11 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:50.371 07:04:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:50.371 07:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:50.371 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.371 ************************************ 00:23:50.371 START TEST nvmf_aer 00:23:50.371 ************************************ 00:23:50.371 07:04:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:50.371 * Looking for test storage... 00:23:50.631 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:50.631 07:04:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:50.631 07:04:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:50.631 07:04:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:50.631 07:04:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:50.631 07:04:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:50.631 07:04:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:50.631 07:04:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:50.631 07:04:12 -- scripts/common.sh@335 -- # IFS=.-: 00:23:50.631 07:04:12 -- scripts/common.sh@335 -- # read -ra ver1 00:23:50.631 07:04:12 -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.631 07:04:12 -- scripts/common.sh@336 -- # read -ra ver2 00:23:50.631 07:04:12 -- scripts/common.sh@337 -- # local 'op=<' 00:23:50.631 07:04:12 -- scripts/common.sh@339 -- # ver1_l=2 00:23:50.631 07:04:12 -- scripts/common.sh@340 -- # ver2_l=1 00:23:50.631 07:04:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:50.631 07:04:12 -- scripts/common.sh@343 -- # case "$op" in 00:23:50.631 07:04:12 -- scripts/common.sh@344 -- # : 1 00:23:50.631 07:04:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:50.631 07:04:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.631 07:04:12 -- scripts/common.sh@364 -- # decimal 1 00:23:50.631 07:04:12 -- scripts/common.sh@352 -- # local d=1 00:23:50.631 07:04:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.631 07:04:12 -- scripts/common.sh@354 -- # echo 1 00:23:50.631 07:04:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:50.631 07:04:12 -- scripts/common.sh@365 -- # decimal 2 00:23:50.631 07:04:12 -- scripts/common.sh@352 -- # local d=2 00:23:50.631 07:04:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.631 07:04:12 -- scripts/common.sh@354 -- # echo 2 00:23:50.631 07:04:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:50.631 07:04:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:50.631 07:04:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:50.631 07:04:12 -- scripts/common.sh@367 -- # return 0 00:23:50.631 07:04:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.631 07:04:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.631 --rc genhtml_branch_coverage=1 00:23:50.631 --rc genhtml_function_coverage=1 00:23:50.631 --rc genhtml_legend=1 00:23:50.631 --rc geninfo_all_blocks=1 00:23:50.631 --rc geninfo_unexecuted_blocks=1 00:23:50.631 00:23:50.631 ' 00:23:50.631 07:04:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.631 --rc genhtml_branch_coverage=1 00:23:50.631 --rc genhtml_function_coverage=1 00:23:50.631 --rc genhtml_legend=1 00:23:50.631 --rc geninfo_all_blocks=1 00:23:50.631 --rc geninfo_unexecuted_blocks=1 00:23:50.631 00:23:50.631 ' 00:23:50.631 07:04:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.631 --rc genhtml_branch_coverage=1 00:23:50.631 --rc genhtml_function_coverage=1 00:23:50.631 --rc genhtml_legend=1 00:23:50.631 --rc geninfo_all_blocks=1 00:23:50.631 --rc geninfo_unexecuted_blocks=1 00:23:50.631 00:23:50.631 ' 00:23:50.631 07:04:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.632 --rc genhtml_branch_coverage=1 00:23:50.632 --rc genhtml_function_coverage=1 00:23:50.632 --rc genhtml_legend=1 00:23:50.632 --rc geninfo_all_blocks=1 00:23:50.632 --rc geninfo_unexecuted_blocks=1 00:23:50.632 00:23:50.632 ' 00:23:50.632 07:04:12 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.632 07:04:12 -- nvmf/common.sh@7 -- # uname -s 00:23:50.632 07:04:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.632 07:04:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.632 07:04:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.632 07:04:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.632 07:04:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.632 07:04:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.632 07:04:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.632 07:04:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.632 07:04:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.632 07:04:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.632 07:04:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
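The version check traced a few entries above (lt 1.15 2 walking through cmp_versions in scripts/common.sh) splits each version string on IFS=.-: and compares the pieces numerically; a compact re-sketch of the same idea, as an illustration rather than the verbatim scripts/common.sh code:

  lt() {
          local IFS=.-: i a b
          read -ra a <<< "$1"; read -ra b <<< "$2"
          for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
                  (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing component decides
                  (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          done
          return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo '1.15 is older than 2'   # matches the traced outcome above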
00:23:50.632 07:04:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:50.632 07:04:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.632 07:04:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.632 07:04:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.632 07:04:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:50.632 07:04:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.632 07:04:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.632 07:04:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.632 07:04:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.632 07:04:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.632 07:04:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.632 07:04:12 -- paths/export.sh@5 -- # export PATH 00:23:50.632 07:04:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.632 07:04:12 -- nvmf/common.sh@46 -- # : 0 00:23:50.632 07:04:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:50.632 07:04:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:50.632 07:04:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:50.632 07:04:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.632 07:04:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.632 07:04:12 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:50.632 07:04:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:50.632 07:04:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:50.632 07:04:12 -- host/aer.sh@11 -- # nvmftestinit 00:23:50.632 07:04:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:50.632 07:04:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.632 07:04:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:50.632 07:04:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:50.632 07:04:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:50.632 07:04:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.632 07:04:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.632 07:04:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.632 07:04:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:50.632 07:04:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:50.632 07:04:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:50.632 07:04:12 -- common/autotest_common.sh@10 -- # set +x 00:23:57.208 07:04:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:57.208 07:04:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:57.208 07:04:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:57.208 07:04:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:57.208 07:04:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:57.208 07:04:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:57.208 07:04:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:57.208 07:04:18 -- nvmf/common.sh@294 -- # net_devs=() 00:23:57.208 07:04:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:57.208 07:04:18 -- nvmf/common.sh@295 -- # e810=() 00:23:57.208 07:04:18 -- nvmf/common.sh@295 -- # local -ga e810 00:23:57.208 07:04:18 -- nvmf/common.sh@296 -- # x722=() 00:23:57.208 07:04:18 -- nvmf/common.sh@296 -- # local -ga x722 00:23:57.208 07:04:18 -- nvmf/common.sh@297 -- # mlx=() 00:23:57.208 07:04:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:57.208 07:04:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.208 07:04:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:57.208 07:04:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:57.208 07:04:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:57.208 07:04:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:57.208 07:04:18 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:57.208 07:04:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:57.208 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:57.208 07:04:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:57.208 07:04:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:57.208 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:57.208 07:04:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:57.208 07:04:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:57.208 07:04:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.208 07:04:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.208 07:04:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.208 07:04:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:57.208 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:57.208 07:04:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.208 07:04:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.208 07:04:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.208 07:04:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.208 07:04:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:57.208 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:57.208 07:04:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.208 07:04:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:57.208 07:04:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:57.208 07:04:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:57.208 07:04:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:57.208 07:04:18 -- nvmf/common.sh@57 -- # uname 00:23:57.208 07:04:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:57.208 07:04:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:57.208 07:04:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:57.208 07:04:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:57.208 07:04:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:57.208 07:04:18 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:23:57.208 07:04:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:57.208 07:04:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:57.208 07:04:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:57.208 07:04:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:57.208 07:04:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:57.208 07:04:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:57.208 07:04:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:57.208 07:04:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:57.208 07:04:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:57.208 07:04:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:57.208 07:04:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:57.208 07:04:18 -- nvmf/common.sh@104 -- # continue 2 00:23:57.208 07:04:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.208 07:04:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:57.208 07:04:18 -- nvmf/common.sh@104 -- # continue 2 00:23:57.208 07:04:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:57.208 07:04:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:57.208 07:04:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:57.208 07:04:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.208 07:04:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:57.208 07:04:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.208 07:04:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:57.208 07:04:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:57.208 07:04:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:57.208 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:57.208 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:57.208 altname enp217s0f0np0 00:23:57.208 altname ens818f0np0 00:23:57.208 inet 192.168.100.8/24 scope global mlx_0_0 00:23:57.208 valid_lft forever preferred_lft forever 00:23:57.208 07:04:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:57.208 07:04:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:57.208 07:04:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:57.208 07:04:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:57.208 07:04:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.208 07:04:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.209 07:04:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:57.209 07:04:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:57.209 07:04:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:57.209 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:57.209 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:57.209 altname enp217s0f1np1 00:23:57.209 altname ens818f1np1 00:23:57.209 inet 192.168.100.9/24 scope global mlx_0_1 00:23:57.209 valid_lft 
forever preferred_lft forever 00:23:57.209 07:04:18 -- nvmf/common.sh@410 -- # return 0 00:23:57.209 07:04:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:57.209 07:04:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:57.209 07:04:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:57.209 07:04:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:57.209 07:04:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:57.209 07:04:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:57.209 07:04:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:57.209 07:04:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:57.209 07:04:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:57.209 07:04:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:57.209 07:04:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.209 07:04:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.209 07:04:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:57.209 07:04:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:57.209 07:04:18 -- nvmf/common.sh@104 -- # continue 2 00:23:57.209 07:04:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.209 07:04:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.209 07:04:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:57.209 07:04:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.209 07:04:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:57.209 07:04:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:57.209 07:04:18 -- nvmf/common.sh@104 -- # continue 2 00:23:57.209 07:04:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:57.209 07:04:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:57.209 07:04:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:57.209 07:04:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:57.209 07:04:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.209 07:04:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.209 07:04:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:57.209 07:04:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:57.209 07:04:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:57.209 07:04:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:57.209 07:04:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.209 07:04:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.209 07:04:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:57.209 192.168.100.9' 00:23:57.209 07:04:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:57.209 192.168.100.9' 00:23:57.209 07:04:18 -- nvmf/common.sh@445 -- # head -n 1 00:23:57.209 07:04:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:57.209 07:04:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:57.209 192.168.100.9' 00:23:57.209 07:04:18 -- nvmf/common.sh@446 -- # head -n 1 00:23:57.209 07:04:18 -- nvmf/common.sh@446 -- # tail -n +2 00:23:57.209 07:04:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:57.209 07:04:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:57.209 07:04:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:57.209 07:04:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:57.209 07:04:18 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:57.209 07:04:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:57.209 07:04:18 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:57.209 07:04:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.209 07:04:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.209 07:04:18 -- common/autotest_common.sh@10 -- # set +x 00:23:57.209 07:04:18 -- nvmf/common.sh@469 -- # nvmfpid=1437939 00:23:57.209 07:04:18 -- nvmf/common.sh@470 -- # waitforlisten 1437939 00:23:57.209 07:04:18 -- common/autotest_common.sh@829 -- # '[' -z 1437939 ']' 00:23:57.209 07:04:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.209 07:04:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.209 07:04:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.209 07:04:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.209 07:04:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.209 07:04:18 -- common/autotest_common.sh@10 -- # set +x 00:23:57.209 [2024-12-15 07:04:18.436956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:57.209 [2024-12-15 07:04:18.437034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.209 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.209 [2024-12-15 07:04:18.508524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.209 [2024-12-15 07:04:18.547458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.209 [2024-12-15 07:04:18.547568] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.209 [2024-12-15 07:04:18.547578] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.209 [2024-12-15 07:04:18.547587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
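At this point the harness has launched nvmf_tgt with the flags shown in the trace (-i 0 -e 0xFFFF -m 0xF) and is polling its UNIX-domain RPC socket before issuing any commands; the "local max_retries=100" above is that wait loop. A minimal bash sketch of the start-and-wait pattern, assuming illustrative binary and socket paths (the real nvmfappstart/waitforlisten helpers carry extra bookkeeping):

# Sketch: start the SPDK NVMe-oF target, then poll the RPC socket until it answers.
# Paths and the rpc_get_methods probe are illustrative, not the autotest's exact code.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do                      # mirrors max_retries=100 in the trace
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done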
00:23:57.209 [2024-12-15 07:04:18.547686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.209 [2024-12-15 07:04:18.547783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.209 [2024-12-15 07:04:18.547867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.209 [2024-12-15 07:04:18.547869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.779 07:04:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.779 07:04:19 -- common/autotest_common.sh@862 -- # return 0 00:23:57.779 07:04:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:57.779 07:04:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.779 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:57.779 07:04:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.779 07:04:19 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:57.779 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.779 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:57.779 [2024-12-15 07:04:19.325470] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f920d0/0x1f965a0) succeed. 00:23:57.779 [2024-12-15 07:04:19.334610] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f93670/0x1fd7c40) succeed. 00:23:58.039 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.039 07:04:19 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:58.039 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.039 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.039 Malloc0 00:23:58.039 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.039 07:04:19 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:58.039 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.039 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.039 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.039 07:04:19 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.039 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.039 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.039 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.039 07:04:19 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:58.039 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.039 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.039 [2024-12-15 07:04:19.502801] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:58.039 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.039 07:04:19 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:58.039 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.039 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.039 [2024-12-15 07:04:19.510441] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:58.039 [ 00:23:58.039 { 00:23:58.039 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:58.039 "subtype": 
"Discovery", 00:23:58.039 "listen_addresses": [], 00:23:58.039 "allow_any_host": true, 00:23:58.039 "hosts": [] 00:23:58.039 }, 00:23:58.039 { 00:23:58.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.039 "subtype": "NVMe", 00:23:58.039 "listen_addresses": [ 00:23:58.039 { 00:23:58.039 "transport": "RDMA", 00:23:58.039 "trtype": "RDMA", 00:23:58.039 "adrfam": "IPv4", 00:23:58.039 "traddr": "192.168.100.8", 00:23:58.039 "trsvcid": "4420" 00:23:58.039 } 00:23:58.039 ], 00:23:58.039 "allow_any_host": true, 00:23:58.039 "hosts": [], 00:23:58.039 "serial_number": "SPDK00000000000001", 00:23:58.039 "model_number": "SPDK bdev Controller", 00:23:58.039 "max_namespaces": 2, 00:23:58.039 "min_cntlid": 1, 00:23:58.039 "max_cntlid": 65519, 00:23:58.039 "namespaces": [ 00:23:58.039 { 00:23:58.039 "nsid": 1, 00:23:58.039 "bdev_name": "Malloc0", 00:23:58.039 "name": "Malloc0", 00:23:58.039 "nguid": "68FBDEB1BAAC4E209685D7264293AF9E", 00:23:58.039 "uuid": "68fbdeb1-baac-4e20-9685-d7264293af9e" 00:23:58.039 } 00:23:58.039 ] 00:23:58.039 } 00:23:58.039 ] 00:23:58.039 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.039 07:04:19 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:58.039 07:04:19 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:58.039 07:04:19 -- host/aer.sh@33 -- # aerpid=1438226 00:23:58.039 07:04:19 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:58.039 07:04:19 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:58.039 07:04:19 -- common/autotest_common.sh@1254 -- # local i=0 00:23:58.039 07:04:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:58.039 07:04:19 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:23:58.039 07:04:19 -- common/autotest_common.sh@1257 -- # i=1 00:23:58.039 07:04:19 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:58.039 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.039 07:04:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:58.039 07:04:19 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:23:58.039 07:04:19 -- common/autotest_common.sh@1257 -- # i=2 00:23:58.039 07:04:19 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:23:58.334 07:04:19 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:58.334 07:04:19 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:58.334 07:04:19 -- common/autotest_common.sh@1265 -- # return 0 00:23:58.334 07:04:19 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:58.334 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.334 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.334 Malloc1 00:23:58.334 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.334 07:04:19 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:58.334 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.334 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.334 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.334 07:04:19 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:58.334 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.334 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.334 [ 00:23:58.334 { 00:23:58.334 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:58.334 "subtype": "Discovery", 00:23:58.334 "listen_addresses": [], 00:23:58.334 "allow_any_host": true, 00:23:58.334 "hosts": [] 00:23:58.334 }, 00:23:58.334 { 00:23:58.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.334 "subtype": "NVMe", 00:23:58.334 "listen_addresses": [ 00:23:58.334 { 00:23:58.334 "transport": "RDMA", 00:23:58.334 "trtype": "RDMA", 00:23:58.334 "adrfam": "IPv4", 00:23:58.334 "traddr": "192.168.100.8", 00:23:58.334 "trsvcid": "4420" 00:23:58.334 } 00:23:58.334 ], 00:23:58.334 "allow_any_host": true, 00:23:58.334 "hosts": [], 00:23:58.334 "serial_number": "SPDK00000000000001", 00:23:58.335 "model_number": "SPDK bdev Controller", 00:23:58.335 "max_namespaces": 2, 00:23:58.335 "min_cntlid": 1, 00:23:58.335 "max_cntlid": 65519, 00:23:58.335 "namespaces": [ 00:23:58.335 { 00:23:58.335 "nsid": 1, 00:23:58.335 "bdev_name": "Malloc0", 00:23:58.335 "name": "Malloc0", 00:23:58.335 "nguid": "68FBDEB1BAAC4E209685D7264293AF9E", 00:23:58.335 "uuid": "68fbdeb1-baac-4e20-9685-d7264293af9e" 00:23:58.335 }, 00:23:58.335 { 00:23:58.335 "nsid": 2, 00:23:58.335 "bdev_name": "Malloc1", 00:23:58.335 "name": "Malloc1", 00:23:58.335 "nguid": "C46026697B2E41C0B55C6EDF5B890D37", 00:23:58.335 "uuid": "c4602669-7b2e-41c0-b55c-6edf5b890d37" 00:23:58.335 } 00:23:58.335 ] 00:23:58.335 } 00:23:58.335 ] 00:23:58.335 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.335 07:04:19 -- host/aer.sh@43 -- # wait 1438226 00:23:58.335 Asynchronous Event Request test 00:23:58.335 Attaching to 192.168.100.8 00:23:58.335 Attached to 192.168.100.8 00:23:58.335 Registering asynchronous event callbacks... 00:23:58.335 Starting namespace attribute notice tests for all controllers... 00:23:58.335 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:58.335 aer_cb - Changed Namespace 00:23:58.335 Cleaning up... 
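The AER exercise above boils down to: stand up an RDMA transport and a subsystem with one namespace, let the aer tool connect and arm an Asynchronous Event Request, then hot-add a second namespace so the target emits a Namespace Attribute Changed notice; the "aer_cb for log page 4" line is the host observing exactly that notice. A condensed sketch of the RPC sequence, using the commands visible in the trace (the rpc.py path is illustrative; the test drives them through its rpc_cmd wrapper):

rpc=./scripts/rpc.py   # illustrative; the autotest wraps this as rpc_cmd
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# the aer tool connects here and waits (-n 2 expects a second namespace to appear)
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AEN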
00:23:58.335 07:04:19 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:58.335 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.335 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.335 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.335 07:04:19 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:58.335 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.335 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.335 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.335 07:04:19 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.335 07:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.335 07:04:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.335 07:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.335 07:04:19 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:58.335 07:04:19 -- host/aer.sh@51 -- # nvmftestfini 00:23:58.335 07:04:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:58.335 07:04:19 -- nvmf/common.sh@116 -- # sync 00:23:58.335 07:04:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:58.335 07:04:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:58.335 07:04:19 -- nvmf/common.sh@119 -- # set +e 00:23:58.335 07:04:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:58.335 07:04:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:58.335 rmmod nvme_rdma 00:23:58.335 rmmod nvme_fabrics 00:23:58.335 07:04:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:58.335 07:04:19 -- nvmf/common.sh@123 -- # set -e 00:23:58.335 07:04:19 -- nvmf/common.sh@124 -- # return 0 00:23:58.335 07:04:19 -- nvmf/common.sh@477 -- # '[' -n 1437939 ']' 00:23:58.335 07:04:19 -- nvmf/common.sh@478 -- # killprocess 1437939 00:23:58.335 07:04:19 -- common/autotest_common.sh@936 -- # '[' -z 1437939 ']' 00:23:58.335 07:04:19 -- common/autotest_common.sh@940 -- # kill -0 1437939 00:23:58.335 07:04:19 -- common/autotest_common.sh@941 -- # uname 00:23:58.335 07:04:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:58.335 07:04:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1437939 00:23:58.594 07:04:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:58.594 07:04:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:58.594 07:04:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1437939' 00:23:58.594 killing process with pid 1437939 00:23:58.594 07:04:20 -- common/autotest_common.sh@955 -- # kill 1437939 00:23:58.594 [2024-12-15 07:04:20.024966] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:58.594 07:04:20 -- common/autotest_common.sh@960 -- # wait 1437939 00:23:58.853 07:04:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:58.853 07:04:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:58.853 00:23:58.853 real 0m8.331s 00:23:58.853 user 0m8.481s 00:23:58.853 sys 0m5.268s 00:23:58.853 07:04:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:58.853 07:04:20 -- common/autotest_common.sh@10 -- # set +x 00:23:58.853 ************************************ 00:23:58.853 END TEST nvmf_aer 00:23:58.853 ************************************ 00:23:58.853 07:04:20 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:58.853 07:04:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:58.853 07:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.853 07:04:20 -- common/autotest_common.sh@10 -- # set +x 00:23:58.853 ************************************ 00:23:58.853 START TEST nvmf_async_init 00:23:58.853 ************************************ 00:23:58.853 07:04:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:58.853 * Looking for test storage... 00:23:58.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:58.853 07:04:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:58.853 07:04:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:58.853 07:04:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:58.853 07:04:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:58.853 07:04:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:58.853 07:04:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:58.853 07:04:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:58.853 07:04:20 -- scripts/common.sh@335 -- # IFS=.-: 00:23:58.854 07:04:20 -- scripts/common.sh@335 -- # read -ra ver1 00:23:58.854 07:04:20 -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.854 07:04:20 -- scripts/common.sh@336 -- # read -ra ver2 00:23:58.854 07:04:20 -- scripts/common.sh@337 -- # local 'op=<' 00:23:58.854 07:04:20 -- scripts/common.sh@339 -- # ver1_l=2 00:23:58.854 07:04:20 -- scripts/common.sh@340 -- # ver2_l=1 00:23:58.854 07:04:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:58.854 07:04:20 -- scripts/common.sh@343 -- # case "$op" in 00:23:58.854 07:04:20 -- scripts/common.sh@344 -- # : 1 00:23:58.854 07:04:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:58.854 07:04:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.854 07:04:20 -- scripts/common.sh@364 -- # decimal 1 00:23:58.854 07:04:20 -- scripts/common.sh@352 -- # local d=1 00:23:58.854 07:04:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.854 07:04:20 -- scripts/common.sh@354 -- # echo 1 00:23:58.854 07:04:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:58.854 07:04:20 -- scripts/common.sh@365 -- # decimal 2 00:23:58.854 07:04:20 -- scripts/common.sh@352 -- # local d=2 00:23:58.854 07:04:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.854 07:04:20 -- scripts/common.sh@354 -- # echo 2 00:23:58.854 07:04:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:58.854 07:04:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:58.854 07:04:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:58.854 07:04:20 -- scripts/common.sh@367 -- # return 0 00:23:58.854 07:04:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.854 07:04:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:58.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.854 --rc genhtml_branch_coverage=1 00:23:58.854 --rc genhtml_function_coverage=1 00:23:58.854 --rc genhtml_legend=1 00:23:58.854 --rc geninfo_all_blocks=1 00:23:58.854 --rc geninfo_unexecuted_blocks=1 00:23:58.854 00:23:58.854 ' 00:23:58.854 07:04:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:58.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.854 --rc genhtml_branch_coverage=1 00:23:58.854 --rc genhtml_function_coverage=1 00:23:58.854 --rc genhtml_legend=1 00:23:58.854 --rc geninfo_all_blocks=1 00:23:58.854 --rc geninfo_unexecuted_blocks=1 00:23:58.854 00:23:58.854 ' 00:23:58.854 07:04:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:58.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.854 --rc genhtml_branch_coverage=1 00:23:58.854 --rc genhtml_function_coverage=1 00:23:58.854 --rc genhtml_legend=1 00:23:58.854 --rc geninfo_all_blocks=1 00:23:58.854 --rc geninfo_unexecuted_blocks=1 00:23:58.854 00:23:58.854 ' 00:23:58.854 07:04:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:58.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.854 --rc genhtml_branch_coverage=1 00:23:58.854 --rc genhtml_function_coverage=1 00:23:58.854 --rc genhtml_legend=1 00:23:58.854 --rc geninfo_all_blocks=1 00:23:58.854 --rc geninfo_unexecuted_blocks=1 00:23:58.854 00:23:58.854 ' 00:23:58.854 07:04:20 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.854 07:04:20 -- nvmf/common.sh@7 -- # uname -s 00:23:58.854 07:04:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.854 07:04:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.854 07:04:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.854 07:04:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.854 07:04:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.854 07:04:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.854 07:04:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.854 07:04:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.114 07:04:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.114 07:04:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.114 07:04:20 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:59.114 07:04:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:59.114 07:04:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.114 07:04:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.114 07:04:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.114 07:04:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:59.114 07:04:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.114 07:04:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.114 07:04:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.114 07:04:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.114 07:04:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.114 07:04:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.114 07:04:20 -- paths/export.sh@5 -- # export PATH 00:23:59.114 07:04:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.114 07:04:20 -- nvmf/common.sh@46 -- # : 0 00:23:59.114 07:04:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:59.114 07:04:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:59.114 07:04:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:59.114 07:04:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.114 07:04:20 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.114 07:04:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:59.114 07:04:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:59.114 07:04:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:59.114 07:04:20 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:59.114 07:04:20 -- host/async_init.sh@14 -- # null_block_size=512 00:23:59.114 07:04:20 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:59.114 07:04:20 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:59.114 07:04:20 -- host/async_init.sh@20 -- # uuidgen 00:23:59.114 07:04:20 -- host/async_init.sh@20 -- # tr -d - 00:23:59.114 07:04:20 -- host/async_init.sh@20 -- # nguid=bb06751a22be4196b6dd103b9af96044 00:23:59.114 07:04:20 -- host/async_init.sh@22 -- # nvmftestinit 00:23:59.114 07:04:20 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:59.114 07:04:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.114 07:04:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:59.114 07:04:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:59.114 07:04:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:59.114 07:04:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.114 07:04:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.114 07:04:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.114 07:04:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:59.114 07:04:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:59.114 07:04:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:59.114 07:04:20 -- common/autotest_common.sh@10 -- # set +x 00:24:05.690 07:04:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:05.690 07:04:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:05.690 07:04:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:05.690 07:04:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:05.690 07:04:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:05.690 07:04:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:05.690 07:04:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:05.690 07:04:26 -- nvmf/common.sh@294 -- # net_devs=() 00:24:05.690 07:04:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:05.690 07:04:26 -- nvmf/common.sh@295 -- # e810=() 00:24:05.690 07:04:26 -- nvmf/common.sh@295 -- # local -ga e810 00:24:05.690 07:04:26 -- nvmf/common.sh@296 -- # x722=() 00:24:05.690 07:04:26 -- nvmf/common.sh@296 -- # local -ga x722 00:24:05.690 07:04:26 -- nvmf/common.sh@297 -- # mlx=() 00:24:05.690 07:04:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:05.690 07:04:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.690 07:04:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:05.690 07:04:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:05.690 07:04:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:05.690 07:04:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:05.690 07:04:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:05.690 07:04:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:05.690 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:05.690 07:04:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:05.690 07:04:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:05.690 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:05.690 07:04:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:05.690 07:04:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:05.690 07:04:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.690 07:04:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.690 07:04:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.690 07:04:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:05.690 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:05.690 07:04:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.690 07:04:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.690 07:04:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.690 07:04:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.690 07:04:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:05.690 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:05.690 07:04:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.690 07:04:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:05.690 07:04:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:05.690 07:04:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:05.690 07:04:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:05.690 07:04:26 -- nvmf/common.sh@57 -- # uname 00:24:05.690 07:04:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:05.690 07:04:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:05.690 07:04:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:05.690 07:04:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:05.690 07:04:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:05.690 07:04:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:05.690 07:04:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:05.690 07:04:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:05.690 07:04:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:05.690 07:04:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:05.690 07:04:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:05.690 07:04:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:05.690 07:04:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:05.690 07:04:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:05.690 07:04:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:05.690 07:04:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:05.690 07:04:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:05.690 07:04:26 -- nvmf/common.sh@104 -- # continue 2 00:24:05.690 07:04:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:05.690 07:04:26 -- nvmf/common.sh@104 -- # continue 2 00:24:05.690 07:04:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:05.690 07:04:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:05.690 07:04:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:05.690 07:04:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:05.690 07:04:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.690 07:04:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.690 07:04:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:05.690 07:04:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:05.690 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:05.690 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:05.690 altname enp217s0f0np0 00:24:05.690 altname ens818f0np0 00:24:05.690 inet 192.168.100.8/24 scope global mlx_0_0 00:24:05.690 valid_lft forever preferred_lft forever 00:24:05.690 07:04:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:05.690 07:04:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:05.690 07:04:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:05.690 07:04:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:05.690 07:04:26 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.690 07:04:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.690 07:04:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:05.690 07:04:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:05.690 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:05.690 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:05.690 altname enp217s0f1np1 00:24:05.690 altname ens818f1np1 00:24:05.690 inet 192.168.100.9/24 scope global mlx_0_1 00:24:05.690 valid_lft forever preferred_lft forever 00:24:05.690 07:04:26 -- nvmf/common.sh@410 -- # return 0 00:24:05.690 07:04:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:05.690 07:04:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:05.690 07:04:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:05.690 07:04:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:05.690 07:04:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:05.690 07:04:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:05.690 07:04:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:05.690 07:04:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:05.690 07:04:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:05.690 07:04:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.690 07:04:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:05.690 07:04:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:05.690 07:04:26 -- nvmf/common.sh@104 -- # continue 2 00:24:05.690 07:04:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.691 07:04:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.691 07:04:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:05.691 07:04:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.691 07:04:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:05.691 07:04:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:05.691 07:04:26 -- nvmf/common.sh@104 -- # continue 2 00:24:05.691 07:04:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:05.691 07:04:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:05.691 07:04:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:05.691 07:04:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:05.691 07:04:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.691 07:04:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.691 07:04:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:05.691 07:04:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:05.691 07:04:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:05.691 07:04:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:05.691 07:04:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.691 07:04:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.691 07:04:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:05.691 192.168.100.9' 00:24:05.691 07:04:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:05.691 192.168.100.9' 00:24:05.691 07:04:26 -- nvmf/common.sh@445 -- # head -n 1 00:24:05.691 07:04:27 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:05.691 07:04:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:05.691 192.168.100.9' 00:24:05.691 07:04:27 -- nvmf/common.sh@446 -- # tail -n +2 00:24:05.691 07:04:27 -- nvmf/common.sh@446 -- # head -n 1 00:24:05.691 07:04:27 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:05.691 07:04:27 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:05.691 07:04:27 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:05.691 07:04:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:05.691 07:04:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:05.691 07:04:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:05.691 07:04:27 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:05.691 07:04:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:05.691 07:04:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.691 07:04:27 -- common/autotest_common.sh@10 -- # set +x 00:24:05.691 07:04:27 -- nvmf/common.sh@469 -- # nvmfpid=1441490 00:24:05.691 07:04:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:05.691 07:04:27 -- nvmf/common.sh@470 -- # waitforlisten 1441490 00:24:05.691 07:04:27 -- common/autotest_common.sh@829 -- # '[' -z 1441490 ']' 00:24:05.691 07:04:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.691 07:04:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.691 07:04:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.691 07:04:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.691 07:04:27 -- common/autotest_common.sh@10 -- # set +x 00:24:05.691 [2024-12-15 07:04:27.096534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:05.691 [2024-12-15 07:04:27.096589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.691 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.691 [2024-12-15 07:04:27.168324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.691 [2024-12-15 07:04:27.207029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:05.691 [2024-12-15 07:04:27.207142] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.691 [2024-12-15 07:04:27.207152] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.691 [2024-12-15 07:04:27.207161] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
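With the nvmf_async_init target up, the setup that follows reduces to exporting a null bdev under a fixed NGUID and attaching to it over RDMA in loopback. A condensed sketch of those RPCs as they appear below in the trace (rpc.py path illustrative):

rpc=./scripts/rpc.py   # illustrative wrapper for the rpc_cmd calls below
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
$rpc bdev_null_create null0 1024 512   # 1024 MiB backing, 512 B blocks -> 2097152 blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bb06751a22be4196b6dd103b9af96044
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 \
    -s 4420 -n nqn.2016-06.io.spdk:cnode0

The fixed NGUID is what lets the later bdev_get_bdevs dumps verify the same namespace identity (uuid bb06751a-...) across controller resets.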
00:24:05.691 [2024-12-15 07:04:27.207185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.630 07:04:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.630 07:04:27 -- common/autotest_common.sh@862 -- # return 0 00:24:06.630 07:04:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:06.630 07:04:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.630 07:04:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 07:04:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.630 07:04:27 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:06.630 07:04:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 [2024-12-15 07:04:27.977046] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10d9230/0x10dd6e0) succeed. 00:24:06.630 [2024-12-15 07:04:27.986198] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10da6e0/0x111ed80) succeed. 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:06.630 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 null0 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:06.630 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:06.630 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bb06751a22be4196b6dd103b9af96044 00:24:06.630 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:06.630 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 [2024-12-15 07:04:28.063434] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:06.630 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 nvme0n1 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:06.630 07:04:28 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.630 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.630 [ 00:24:06.630 { 00:24:06.630 "name": "nvme0n1", 00:24:06.630 "aliases": [ 00:24:06.630 "bb06751a-22be-4196-b6dd-103b9af96044" 00:24:06.630 ], 00:24:06.630 "product_name": "NVMe disk", 00:24:06.630 "block_size": 512, 00:24:06.630 "num_blocks": 2097152, 00:24:06.630 "uuid": "bb06751a-22be-4196-b6dd-103b9af96044", 00:24:06.630 "assigned_rate_limits": { 00:24:06.630 "rw_ios_per_sec": 0, 00:24:06.630 "rw_mbytes_per_sec": 0, 00:24:06.630 "r_mbytes_per_sec": 0, 00:24:06.630 "w_mbytes_per_sec": 0 00:24:06.630 }, 00:24:06.630 "claimed": false, 00:24:06.630 "zoned": false, 00:24:06.630 "supported_io_types": { 00:24:06.630 "read": true, 00:24:06.630 "write": true, 00:24:06.630 "unmap": false, 00:24:06.630 "write_zeroes": true, 00:24:06.630 "flush": true, 00:24:06.630 "reset": true, 00:24:06.630 "compare": true, 00:24:06.630 "compare_and_write": true, 00:24:06.630 "abort": true, 00:24:06.630 "nvme_admin": true, 00:24:06.630 "nvme_io": true 00:24:06.630 }, 00:24:06.630 "memory_domains": [ 00:24:06.630 { 00:24:06.630 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:06.630 "dma_device_type": 0 00:24:06.630 } 00:24:06.630 ], 00:24:06.630 "driver_specific": { 00:24:06.630 "nvme": [ 00:24:06.630 { 00:24:06.630 "trid": { 00:24:06.630 "trtype": "RDMA", 00:24:06.630 "adrfam": "IPv4", 00:24:06.630 "traddr": "192.168.100.8", 00:24:06.630 "trsvcid": "4420", 00:24:06.630 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:06.630 }, 00:24:06.630 "ctrlr_data": { 00:24:06.630 "cntlid": 1, 00:24:06.630 "vendor_id": "0x8086", 00:24:06.630 "model_number": "SPDK bdev Controller", 00:24:06.630 "serial_number": "00000000000000000000", 00:24:06.630 "firmware_revision": "24.01.1", 00:24:06.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.630 "oacs": { 00:24:06.630 "security": 0, 00:24:06.630 "format": 0, 00:24:06.630 "firmware": 0, 00:24:06.630 "ns_manage": 0 00:24:06.630 }, 00:24:06.630 "multi_ctrlr": true, 00:24:06.630 "ana_reporting": false 00:24:06.630 }, 00:24:06.630 "vs": { 00:24:06.630 "nvme_version": "1.3" 00:24:06.630 }, 00:24:06.630 "ns_data": { 00:24:06.630 "id": 1, 00:24:06.630 "can_share": true 00:24:06.630 } 00:24:06.630 } 00:24:06.630 ], 00:24:06.630 "mp_policy": "active_passive" 00:24:06.630 } 00:24:06.630 } 00:24:06.630 ] 00:24:06.630 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.630 07:04:28 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:06.631 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.631 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.631 [2024-12-15 07:04:28.164764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:06.631 [2024-12-15 07:04:28.184364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:06.631 [2024-12-15 07:04:28.207233] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
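The reset step above disconnects and reattaches the controller; the bdev survives and comes back under a new controller ID, which the next bdev_get_bdevs dump confirms (cntlid goes from 1 to 2). A two-line sketch of that check, assuming the illustrative rpc.py path:

./scripts/rpc.py bdev_nvme_reset_controller nvme0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | grep '"cntlid"'   # expect 2 after the reset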
00:24:06.631 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.631 07:04:28 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:06.631 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.631 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.631 [ 00:24:06.631 { 00:24:06.631 "name": "nvme0n1", 00:24:06.631 "aliases": [ 00:24:06.631 "bb06751a-22be-4196-b6dd-103b9af96044" 00:24:06.631 ], 00:24:06.631 "product_name": "NVMe disk", 00:24:06.631 "block_size": 512, 00:24:06.631 "num_blocks": 2097152, 00:24:06.631 "uuid": "bb06751a-22be-4196-b6dd-103b9af96044", 00:24:06.631 "assigned_rate_limits": { 00:24:06.631 "rw_ios_per_sec": 0, 00:24:06.631 "rw_mbytes_per_sec": 0, 00:24:06.631 "r_mbytes_per_sec": 0, 00:24:06.631 "w_mbytes_per_sec": 0 00:24:06.631 }, 00:24:06.631 "claimed": false, 00:24:06.631 "zoned": false, 00:24:06.631 "supported_io_types": { 00:24:06.631 "read": true, 00:24:06.631 "write": true, 00:24:06.631 "unmap": false, 00:24:06.631 "write_zeroes": true, 00:24:06.631 "flush": true, 00:24:06.631 "reset": true, 00:24:06.631 "compare": true, 00:24:06.631 "compare_and_write": true, 00:24:06.631 "abort": true, 00:24:06.631 "nvme_admin": true, 00:24:06.631 "nvme_io": true 00:24:06.631 }, 00:24:06.631 "memory_domains": [ 00:24:06.631 { 00:24:06.631 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:06.631 "dma_device_type": 0 00:24:06.631 } 00:24:06.631 ], 00:24:06.631 "driver_specific": { 00:24:06.631 "nvme": [ 00:24:06.631 { 00:24:06.631 "trid": { 00:24:06.631 "trtype": "RDMA", 00:24:06.631 "adrfam": "IPv4", 00:24:06.631 "traddr": "192.168.100.8", 00:24:06.631 "trsvcid": "4420", 00:24:06.631 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:06.631 }, 00:24:06.631 "ctrlr_data": { 00:24:06.631 "cntlid": 2, 00:24:06.631 "vendor_id": "0x8086", 00:24:06.631 "model_number": "SPDK bdev Controller", 00:24:06.631 "serial_number": "00000000000000000000", 00:24:06.631 "firmware_revision": "24.01.1", 00:24:06.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.631 "oacs": { 00:24:06.631 "security": 0, 00:24:06.631 "format": 0, 00:24:06.631 "firmware": 0, 00:24:06.631 "ns_manage": 0 00:24:06.631 }, 00:24:06.631 "multi_ctrlr": true, 00:24:06.631 "ana_reporting": false 00:24:06.631 }, 00:24:06.631 "vs": { 00:24:06.631 "nvme_version": "1.3" 00:24:06.631 }, 00:24:06.631 "ns_data": { 00:24:06.631 "id": 1, 00:24:06.631 "can_share": true 00:24:06.631 } 00:24:06.631 } 00:24:06.631 ], 00:24:06.631 "mp_policy": "active_passive" 00:24:06.631 } 00:24:06.631 } 00:24:06.631 ] 00:24:06.631 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.631 07:04:28 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.631 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.631 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.631 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.631 07:04:28 -- host/async_init.sh@53 -- # mktemp 00:24:06.631 07:04:28 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HzZPnm8qrZ 00:24:06.631 07:04:28 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:06.631 07:04:28 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HzZPnm8qrZ 00:24:06.631 07:04:28 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:06.631 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.631 07:04:28 -- common/autotest_common.sh@10 -- # set +x 
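The TLS portion that follows exercises the interchange-format pre-shared key: write the key to a mode-0600 file, disable allow-any-host on the subsystem, open a --secure-channel listener on port 4421, grant host1 with the PSK, and reconnect using the same key. A condensed sketch assembled from the commands in the trace (key value copied from above; rpc.py path illustrative):

key=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
chmod 0600 "$key"                      # PSK files must not be world-readable
rpc=./scripts/rpc.py
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 \
    -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

Note the "TLS support is considered experimental" warning emitted below; this path is a functional smoke test, not a hardened configuration.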
00:24:06.891 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.891 07:04:28 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:06.891 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.891 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.891 [2024-12-15 07:04:28.274640] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:06.891 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.891 07:04:28 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HzZPnm8qrZ 00:24:06.891 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.891 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.891 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.891 07:04:28 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HzZPnm8qrZ 00:24:06.891 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.891 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.891 [2024-12-15 07:04:28.290666] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.891 nvme0n1 00:24:06.891 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.891 07:04:28 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:06.891 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.891 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.891 [ 00:24:06.891 { 00:24:06.891 "name": "nvme0n1", 00:24:06.891 "aliases": [ 00:24:06.891 "bb06751a-22be-4196-b6dd-103b9af96044" 00:24:06.891 ], 00:24:06.891 "product_name": "NVMe disk", 00:24:06.891 "block_size": 512, 00:24:06.891 "num_blocks": 2097152, 00:24:06.891 "uuid": "bb06751a-22be-4196-b6dd-103b9af96044", 00:24:06.891 "assigned_rate_limits": { 00:24:06.891 "rw_ios_per_sec": 0, 00:24:06.891 "rw_mbytes_per_sec": 0, 00:24:06.891 "r_mbytes_per_sec": 0, 00:24:06.891 "w_mbytes_per_sec": 0 00:24:06.891 }, 00:24:06.891 "claimed": false, 00:24:06.891 "zoned": false, 00:24:06.891 "supported_io_types": { 00:24:06.891 "read": true, 00:24:06.891 "write": true, 00:24:06.891 "unmap": false, 00:24:06.891 "write_zeroes": true, 00:24:06.891 "flush": true, 00:24:06.891 "reset": true, 00:24:06.891 "compare": true, 00:24:06.891 "compare_and_write": true, 00:24:06.891 "abort": true, 00:24:06.891 "nvme_admin": true, 00:24:06.891 "nvme_io": true 00:24:06.891 }, 00:24:06.891 "memory_domains": [ 00:24:06.891 { 00:24:06.891 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:06.891 "dma_device_type": 0 00:24:06.891 } 00:24:06.891 ], 00:24:06.891 "driver_specific": { 00:24:06.891 "nvme": [ 00:24:06.891 { 00:24:06.891 "trid": { 00:24:06.891 "trtype": "RDMA", 00:24:06.891 "adrfam": "IPv4", 00:24:06.891 "traddr": "192.168.100.8", 00:24:06.891 "trsvcid": "4421", 00:24:06.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:06.891 }, 00:24:06.891 "ctrlr_data": { 00:24:06.891 "cntlid": 3, 00:24:06.891 "vendor_id": "0x8086", 00:24:06.891 "model_number": "SPDK bdev Controller", 00:24:06.891 "serial_number": "00000000000000000000", 00:24:06.891 "firmware_revision": "24.01.1", 00:24:06.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.891 
"oacs": { 00:24:06.891 "security": 0, 00:24:06.891 "format": 0, 00:24:06.891 "firmware": 0, 00:24:06.891 "ns_manage": 0 00:24:06.891 }, 00:24:06.891 "multi_ctrlr": true, 00:24:06.891 "ana_reporting": false 00:24:06.891 }, 00:24:06.891 "vs": { 00:24:06.891 "nvme_version": "1.3" 00:24:06.891 }, 00:24:06.891 "ns_data": { 00:24:06.891 "id": 1, 00:24:06.891 "can_share": true 00:24:06.891 } 00:24:06.891 } 00:24:06.891 ], 00:24:06.891 "mp_policy": "active_passive" 00:24:06.891 } 00:24:06.891 } 00:24:06.891 ] 00:24:06.891 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.891 07:04:28 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.891 07:04:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.891 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:06.891 07:04:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.891 07:04:28 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.HzZPnm8qrZ 00:24:06.891 07:04:28 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:06.891 07:04:28 -- host/async_init.sh@78 -- # nvmftestfini 00:24:06.891 07:04:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:06.891 07:04:28 -- nvmf/common.sh@116 -- # sync 00:24:06.891 07:04:28 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:06.891 07:04:28 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:06.891 07:04:28 -- nvmf/common.sh@119 -- # set +e 00:24:06.891 07:04:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:06.891 07:04:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:06.891 rmmod nvme_rdma 00:24:06.891 rmmod nvme_fabrics 00:24:06.891 07:04:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:06.891 07:04:28 -- nvmf/common.sh@123 -- # set -e 00:24:06.891 07:04:28 -- nvmf/common.sh@124 -- # return 0 00:24:06.891 07:04:28 -- nvmf/common.sh@477 -- # '[' -n 1441490 ']' 00:24:06.891 07:04:28 -- nvmf/common.sh@478 -- # killprocess 1441490 00:24:06.891 07:04:28 -- common/autotest_common.sh@936 -- # '[' -z 1441490 ']' 00:24:06.891 07:04:28 -- common/autotest_common.sh@940 -- # kill -0 1441490 00:24:06.891 07:04:28 -- common/autotest_common.sh@941 -- # uname 00:24:06.891 07:04:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:06.891 07:04:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1441490 00:24:07.151 07:04:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:07.151 07:04:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:07.151 07:04:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1441490' 00:24:07.151 killing process with pid 1441490 00:24:07.151 07:04:28 -- common/autotest_common.sh@955 -- # kill 1441490 00:24:07.151 07:04:28 -- common/autotest_common.sh@960 -- # wait 1441490 00:24:07.151 07:04:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:07.151 07:04:28 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:07.151 00:24:07.151 real 0m8.413s 00:24:07.151 user 0m3.704s 00:24:07.151 sys 0m5.399s 00:24:07.151 07:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:07.151 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:07.151 ************************************ 00:24:07.151 END TEST nvmf_async_init 00:24:07.151 ************************************ 00:24:07.151 07:04:28 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:07.151 07:04:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:07.151 
07:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:07.151 07:04:28 -- common/autotest_common.sh@10 -- # set +x 00:24:07.151 ************************************ 00:24:07.151 START TEST dma 00:24:07.151 ************************************ 00:24:07.151 07:04:28 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:07.411 * Looking for test storage... 00:24:07.412 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:07.412 07:04:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:07.412 07:04:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:07.412 07:04:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:07.412 07:04:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:07.412 07:04:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:07.412 07:04:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:07.412 07:04:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:07.412 07:04:28 -- scripts/common.sh@335 -- # IFS=.-: 00:24:07.412 07:04:28 -- scripts/common.sh@335 -- # read -ra ver1 00:24:07.412 07:04:28 -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.412 07:04:28 -- scripts/common.sh@336 -- # read -ra ver2 00:24:07.412 07:04:28 -- scripts/common.sh@337 -- # local 'op=<' 00:24:07.412 07:04:28 -- scripts/common.sh@339 -- # ver1_l=2 00:24:07.412 07:04:28 -- scripts/common.sh@340 -- # ver2_l=1 00:24:07.412 07:04:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:07.412 07:04:28 -- scripts/common.sh@343 -- # case "$op" in 00:24:07.412 07:04:28 -- scripts/common.sh@344 -- # : 1 00:24:07.412 07:04:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:07.412 07:04:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.412 07:04:28 -- scripts/common.sh@364 -- # decimal 1 00:24:07.412 07:04:28 -- scripts/common.sh@352 -- # local d=1 00:24:07.412 07:04:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.412 07:04:28 -- scripts/common.sh@354 -- # echo 1 00:24:07.412 07:04:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:07.412 07:04:28 -- scripts/common.sh@365 -- # decimal 2 00:24:07.412 07:04:28 -- scripts/common.sh@352 -- # local d=2 00:24:07.412 07:04:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.412 07:04:28 -- scripts/common.sh@354 -- # echo 2 00:24:07.412 07:04:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:07.412 07:04:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:07.412 07:04:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:07.412 07:04:28 -- scripts/common.sh@367 -- # return 0 00:24:07.412 07:04:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.412 07:04:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:07.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.412 --rc genhtml_branch_coverage=1 00:24:07.412 --rc genhtml_function_coverage=1 00:24:07.412 --rc genhtml_legend=1 00:24:07.412 --rc geninfo_all_blocks=1 00:24:07.412 --rc geninfo_unexecuted_blocks=1 00:24:07.412 00:24:07.412 ' 00:24:07.412 07:04:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:07.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.412 --rc genhtml_branch_coverage=1 00:24:07.412 --rc genhtml_function_coverage=1 00:24:07.412 --rc genhtml_legend=1 00:24:07.412 --rc geninfo_all_blocks=1 00:24:07.412 --rc geninfo_unexecuted_blocks=1 00:24:07.412 00:24:07.412 ' 00:24:07.412 07:04:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:07.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.412 --rc genhtml_branch_coverage=1 00:24:07.412 --rc genhtml_function_coverage=1 00:24:07.412 --rc genhtml_legend=1 00:24:07.412 --rc geninfo_all_blocks=1 00:24:07.412 --rc geninfo_unexecuted_blocks=1 00:24:07.412 00:24:07.412 ' 00:24:07.412 07:04:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:07.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.412 --rc genhtml_branch_coverage=1 00:24:07.412 --rc genhtml_function_coverage=1 00:24:07.412 --rc genhtml_legend=1 00:24:07.412 --rc geninfo_all_blocks=1 00:24:07.412 --rc geninfo_unexecuted_blocks=1 00:24:07.412 00:24:07.412 ' 00:24:07.412 07:04:28 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.412 07:04:28 -- nvmf/common.sh@7 -- # uname -s 00:24:07.412 07:04:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.412 07:04:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.412 07:04:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.412 07:04:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.412 07:04:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.412 07:04:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.412 07:04:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.412 07:04:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.412 07:04:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.412 07:04:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.412 07:04:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
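nvmf/common.sh pins the test topology at this point: ports 4420-4422, the 192.168.100.0/24 address prefix, and a host NQN derived via nvme-cli. For reference, a sketch of how an initiator could consume those defaults, assuming nvme-cli is installed and a listener is already up (the trace never runs nvme connect itself; the -i 15 I/O-queue count mirrors what common.sh later folds into NVME_CONNECT for RDMA):

    # gen-hostnqn derives a stable NQN from the machine UUID.
    HOSTNQN=$(nvme gen-hostnqn)
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 --hostnqn "$HOSTNQN"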
00:24:07.412 07:04:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:07.412 07:04:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.412 07:04:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.412 07:04:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.412 07:04:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:07.412 07:04:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.412 07:04:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.412 07:04:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.412 07:04:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.412 07:04:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.412 07:04:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.412 07:04:28 -- paths/export.sh@5 -- # export PATH 00:24:07.412 07:04:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.412 07:04:28 -- nvmf/common.sh@46 -- # : 0 00:24:07.412 07:04:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:07.412 07:04:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:07.412 07:04:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:07.412 07:04:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.412 07:04:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.412 07:04:28 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:07.412 07:04:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:07.412 07:04:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:07.412 07:04:28 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:07.412 07:04:28 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:07.412 07:04:28 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:07.412 07:04:28 -- host/dma.sh@18 -- # subsystem=0 00:24:07.412 07:04:28 -- host/dma.sh@93 -- # nvmftestinit 00:24:07.412 07:04:28 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:07.412 07:04:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.412 07:04:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:07.412 07:04:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:07.412 07:04:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:07.412 07:04:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.412 07:04:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.412 07:04:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.412 07:04:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:07.412 07:04:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:07.412 07:04:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:07.412 07:04:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.987 07:04:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:13.987 07:04:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:13.987 07:04:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:13.987 07:04:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:13.987 07:04:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:13.987 07:04:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:13.987 07:04:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:13.987 07:04:35 -- nvmf/common.sh@294 -- # net_devs=() 00:24:13.987 07:04:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:13.987 07:04:35 -- nvmf/common.sh@295 -- # e810=() 00:24:13.987 07:04:35 -- nvmf/common.sh@295 -- # local -ga e810 00:24:13.987 07:04:35 -- nvmf/common.sh@296 -- # x722=() 00:24:13.987 07:04:35 -- nvmf/common.sh@296 -- # local -ga x722 00:24:13.987 07:04:35 -- nvmf/common.sh@297 -- # mlx=() 00:24:13.987 07:04:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:13.987 07:04:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.987 07:04:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:13.987 07:04:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:13.987 07:04:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:13.987 07:04:35 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:13.987 07:04:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:13.987 07:04:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:13.987 07:04:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:13.987 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:13.987 07:04:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:13.987 07:04:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:13.987 07:04:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:13.987 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:13.987 07:04:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:13.987 07:04:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:13.987 07:04:35 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:13.987 07:04:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.987 07:04:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:13.987 07:04:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.987 07:04:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:13.987 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:13.987 07:04:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.987 07:04:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:13.987 07:04:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.987 07:04:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:13.987 07:04:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.987 07:04:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:13.987 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:13.987 07:04:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.987 07:04:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:13.987 07:04:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:13.987 07:04:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:13.987 07:04:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:13.987 07:04:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:13.987 07:04:35 -- nvmf/common.sh@57 -- # uname 00:24:13.987 07:04:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:13.987 07:04:35 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:13.987 07:04:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:13.987 07:04:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:13.987 07:04:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:13.987 07:04:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:13.987 07:04:35 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:13.987 07:04:35 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:13.987 07:04:35 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:13.987 07:04:35 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:13.987 07:04:35 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:13.987 07:04:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:13.987 07:04:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:13.987 07:04:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:13.987 07:04:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:14.247 07:04:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:14.247 07:04:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@104 -- # continue 2 00:24:14.247 07:04:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@104 -- # continue 2 00:24:14.247 07:04:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:14.247 07:04:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:14.247 07:04:35 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:14.247 07:04:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:14.247 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:14.247 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:14.247 altname enp217s0f0np0 00:24:14.247 altname ens818f0np0 00:24:14.247 inet 192.168.100.8/24 scope global mlx_0_0 00:24:14.247 valid_lft forever preferred_lft forever 00:24:14.247 07:04:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:14.247 07:04:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:14.247 07:04:35 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:14.247 07:04:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:14.247 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:14.247 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:14.247 altname enp217s0f1np1 00:24:14.247 altname ens818f1np1 00:24:14.247 inet 192.168.100.9/24 scope global mlx_0_1 00:24:14.247 valid_lft forever preferred_lft forever 00:24:14.247 07:04:35 -- nvmf/common.sh@410 -- # return 0 00:24:14.247 07:04:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:14.247 07:04:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:14.247 07:04:35 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:14.247 07:04:35 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:14.247 07:04:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:14.247 07:04:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:14.247 07:04:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:14.247 07:04:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:14.247 07:04:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:14.247 07:04:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@104 -- # continue 2 00:24:14.247 07:04:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:14.247 07:04:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:14.247 07:04:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@104 -- # continue 2 00:24:14.247 07:04:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:14.247 07:04:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:14.247 07:04:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:14.247 07:04:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:14.247 07:04:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:14.247 07:04:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:14.247 192.168.100.9' 00:24:14.247 07:04:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:14.247 192.168.100.9' 00:24:14.247 07:04:35 -- nvmf/common.sh@445 -- # head -n 1 00:24:14.247 07:04:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:14.247 07:04:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:14.247 192.168.100.9' 00:24:14.247 07:04:35 -- nvmf/common.sh@446 -- # tail -n +2 00:24:14.247 07:04:35 -- nvmf/common.sh@446 -- # head -n 1 00:24:14.247 07:04:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:14.247 07:04:35 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:14.247 07:04:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:14.247 07:04:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:14.247 07:04:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:14.247 07:04:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:14.247 07:04:35 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:14.247 07:04:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:14.247 07:04:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:14.247 07:04:35 -- common/autotest_common.sh@10 -- # set +x 00:24:14.247 07:04:35 -- nvmf/common.sh@469 -- # nvmfpid=1445139 00:24:14.247 07:04:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:14.247 07:04:35 -- nvmf/common.sh@470 -- # waitforlisten 1445139 00:24:14.247 07:04:35 -- common/autotest_common.sh@829 -- # '[' -z 1445139 ']' 00:24:14.247 07:04:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.247 07:04:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.247 07:04:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.247 07:04:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.247 07:04:35 -- common/autotest_common.sh@10 -- # set +x 00:24:14.247 [2024-12-15 07:04:35.837082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:14.247 [2024-12-15 07:04:35.837134] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.247 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.506 [2024-12-15 07:04:35.905757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:14.506 [2024-12-15 07:04:35.943260] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:14.506 [2024-12-15 07:04:35.943367] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.506 [2024-12-15 07:04:35.943377] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.506 [2024-12-15 07:04:35.943386] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
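nvmfappstart, traced above, reduces to launching nvmf_tgt with a core mask and blocking until its RPC socket answers; everything after the EAL banner is ordinary DPDK/reactor startup chatter. A stripped-down sketch of that sequence, assuming the build-tree paths from this workspace and spdk_get_version as the liveness probe (the harness's waitforlisten does roughly this):

    # Launch the target on cores 0-1 with all trace groups enabled, then wait for RPC.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready"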
00:24:14.506 [2024-12-15 07:04:35.943432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.506 [2024-12-15 07:04:35.943435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.075 07:04:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.075 07:04:36 -- common/autotest_common.sh@862 -- # return 0 00:24:15.075 07:04:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:15.075 07:04:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:15.075 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:24:15.075 07:04:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.075 07:04:36 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:15.075 07:04:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.075 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:24:15.334 [2024-12-15 07:04:36.722431] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f79b40/0x1f7dff0) succeed. 00:24:15.334 [2024-12-15 07:04:36.731335] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f7aff0/0x1fbf690) succeed. 00:24:15.334 07:04:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.334 07:04:36 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:15.334 07:04:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.334 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:24:15.334 Malloc0 00:24:15.334 07:04:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.334 07:04:36 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:15.334 07:04:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.334 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:24:15.334 07:04:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.334 07:04:36 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:15.334 07:04:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.334 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:24:15.334 07:04:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.334 07:04:36 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:15.334 07:04:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.334 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:24:15.334 [2024-12-15 07:04:36.892669] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:15.334 07:04:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.334 07:04:36 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:15.334 07:04:36 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:15.334 07:04:36 -- nvmf/common.sh@520 -- # config=() 00:24:15.334 07:04:36 -- nvmf/common.sh@520 -- # local subsystem config 00:24:15.334 07:04:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:15.334 07:04:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:15.334 { 00:24:15.334 "params": { 00:24:15.334 "name": "Nvme$subsystem", 00:24:15.334 "trtype": "$TEST_TRANSPORT", 00:24:15.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.334 "adrfam": 
"ipv4", 00:24:15.334 "trsvcid": "$NVMF_PORT", 00:24:15.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.334 "hdgst": ${hdgst:-false}, 00:24:15.334 "ddgst": ${ddgst:-false} 00:24:15.334 }, 00:24:15.334 "method": "bdev_nvme_attach_controller" 00:24:15.334 } 00:24:15.334 EOF 00:24:15.334 )") 00:24:15.334 07:04:36 -- nvmf/common.sh@542 -- # cat 00:24:15.334 07:04:36 -- nvmf/common.sh@544 -- # jq . 00:24:15.334 07:04:36 -- nvmf/common.sh@545 -- # IFS=, 00:24:15.334 07:04:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:15.334 "params": { 00:24:15.334 "name": "Nvme0", 00:24:15.334 "trtype": "rdma", 00:24:15.334 "traddr": "192.168.100.8", 00:24:15.334 "adrfam": "ipv4", 00:24:15.334 "trsvcid": "4420", 00:24:15.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.334 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:15.334 "hdgst": false, 00:24:15.334 "ddgst": false 00:24:15.334 }, 00:24:15.334 "method": "bdev_nvme_attach_controller" 00:24:15.334 }' 00:24:15.334 [2024-12-15 07:04:36.940066] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:15.335 [2024-12-15 07:04:36.940119] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445346 ] 00:24:15.335 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.594 [2024-12-15 07:04:37.008937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:15.594 [2024-12-15 07:04:37.045946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.594 [2024-12-15 07:04:37.045949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.953 bdev Nvme0n1 reports 1 memory domains 00:24:20.953 bdev Nvme0n1 supports RDMA memory domain 00:24:20.953 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:20.953 ========================================================================== 00:24:20.953 Latency [us] 00:24:20.953 IOPS MiB/s Average min max 00:24:20.953 Core 2: 22061.58 86.18 724.57 239.79 8655.49 00:24:20.953 Core 3: 22151.97 86.53 721.56 229.86 8713.38 00:24:20.953 ========================================================================== 00:24:20.953 Total : 44213.55 172.71 723.06 229.86 8713.38 00:24:20.953 00:24:20.953 Total operations: 221102, translate 221102 pull_push 0 memzero 0 00:24:20.954 07:04:42 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:20.954 07:04:42 -- host/dma.sh@107 -- # gen_malloc_json 00:24:20.954 07:04:42 -- host/dma.sh@21 -- # jq . 00:24:20.954 [2024-12-15 07:04:42.460786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:20.954 [2024-12-15 07:04:42.460843] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446249 ] 00:24:20.954 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.954 [2024-12-15 07:04:42.527398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:20.954 [2024-12-15 07:04:42.561070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.954 [2024-12-15 07:04:42.561073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.234 bdev Malloc0 reports 1 memory domains 00:24:26.234 bdev Malloc0 doesn't support RDMA memory domain 00:24:26.234 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:26.234 ========================================================================== 00:24:26.234 Latency [us] 00:24:26.234 IOPS MiB/s Average min max 00:24:26.234 Core 2: 14921.56 58.29 1071.55 432.52 1369.17 00:24:26.234 Core 3: 15196.67 59.36 1052.11 411.88 1973.72 00:24:26.234 ========================================================================== 00:24:26.234 Total : 30118.24 117.65 1061.74 411.88 1973.72 00:24:26.234 00:24:26.234 Total operations: 150640, translate 0 pull_push 602560 memzero 0 00:24:26.234 07:04:47 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:26.234 07:04:47 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:26.234 07:04:47 -- host/dma.sh@48 -- # local subsystem=0 00:24:26.234 07:04:47 -- host/dma.sh@50 -- # jq . 00:24:26.493 Ignoring -M option 00:24:26.494 [2024-12-15 07:04:47.898657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:26.494 [2024-12-15 07:04:47.898712] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447142 ] 00:24:26.494 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.494 [2024-12-15 07:04:47.965163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:26.494 [2024-12-15 07:04:48.001238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.494 [2024-12-15 07:04:48.001241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.752 [2024-12-15 07:04:48.201062] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:32.112 [2024-12-15 07:04:53.229555] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:32.112 bdev 3653ace1-deab-4e96-9683-b8fbfdcb9263 reports 1 memory domains 00:24:32.112 bdev 3653ace1-deab-4e96-9683-b8fbfdcb9263 supports RDMA memory domain 00:24:32.112 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:32.112 ========================================================================== 00:24:32.112 Latency [us] 00:24:32.112 IOPS MiB/s Average min max 00:24:32.112 Core 2: 73698.64 287.89 216.27 84.09 3077.54 00:24:32.112 Core 3: 70574.60 275.68 225.80 82.85 2996.47 00:24:32.112 ========================================================================== 00:24:32.112 Total : 144273.24 563.57 220.93 82.85 3077.54 00:24:32.112 00:24:32.112 Total operations: 721450, translate 0 pull_push 0 memzero 721450 00:24:32.112 07:04:53 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:32.112 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.112 [2024-12-15 07:04:53.524410] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:34.651 Initializing NVMe Controllers 00:24:34.651 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:24:34.651 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:34.652 Initialization complete. Launching workers. 00:24:34.652 ======================================================== 00:24:34.652 Latency(us) 00:24:34.652 Device Information : IOPS MiB/s Average min max 00:24:34.652 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.69 7.91 7964.40 4986.54 9977.57 00:24:34.652 ======================================================== 00:24:34.652 Total : 2024.69 7.91 7964.40 4986.54 9977.57 00:24:34.652 00:24:34.652 07:04:55 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:24:34.652 07:04:55 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:24:34.652 07:04:55 -- host/dma.sh@48 -- # local subsystem=0 00:24:34.652 07:04:55 -- host/dma.sh@50 -- # jq . 
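Between the DMA runs above, the script also drives a short spdk_nvme_perf write workload straight at the RDMA listener; the single -r argument packs transport type, address family, address and service ID together. The invocation from the trace, reproduced as a standalone command (path relative to the spdk build tree):

    # 16-deep, 4 KiB writes for 1 second over NVMe/RDMA at 192.168.100.8:4420.
    build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'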
00:24:34.652 [2024-12-15 07:04:55.869574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:34.652 [2024-12-15 07:04:55.869627] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448486 ] 00:24:34.652 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.652 [2024-12-15 07:04:55.936757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.652 [2024-12-15 07:04:55.972412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.652 [2024-12-15 07:04:55.972415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.652 [2024-12-15 07:04:56.179344] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:39.929 [2024-12-15 07:05:01.210642] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:39.929 bdev 1c6da07a-0613-4a29-95a6-828cfc1947af reports 1 memory domains 00:24:39.929 bdev 1c6da07a-0613-4a29-95a6-828cfc1947af supports RDMA memory domain 00:24:39.929 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:39.929 ========================================================================== 00:24:39.929 Latency [us] 00:24:39.929 IOPS MiB/s Average min max 00:24:39.929 Core 2: 19464.20 76.03 821.31 50.29 9220.24 00:24:39.929 Core 3: 19831.55 77.47 806.12 13.34 9404.52 00:24:39.929 ========================================================================== 00:24:39.929 Total : 39295.76 153.50 813.65 13.34 9404.52 00:24:39.929 00:24:39.929 Total operations: 196507, translate 196404 pull_push 0 memzero 103 00:24:39.929 07:05:01 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:39.929 07:05:01 -- host/dma.sh@120 -- # nvmftestfini 00:24:39.929 07:05:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:39.929 07:05:01 -- nvmf/common.sh@116 -- # sync 00:24:39.929 07:05:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:39.929 07:05:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:39.929 07:05:01 -- nvmf/common.sh@119 -- # set +e 00:24:39.929 07:05:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:39.929 07:05:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:39.929 rmmod nvme_rdma 00:24:39.929 rmmod nvme_fabrics 00:24:39.929 07:05:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:39.929 07:05:01 -- nvmf/common.sh@123 -- # set -e 00:24:39.929 07:05:01 -- nvmf/common.sh@124 -- # return 0 00:24:39.929 07:05:01 -- nvmf/common.sh@477 -- # '[' -n 1445139 ']' 00:24:39.929 07:05:01 -- nvmf/common.sh@478 -- # killprocess 1445139 00:24:39.929 07:05:01 -- common/autotest_common.sh@936 -- # '[' -z 1445139 ']' 00:24:39.929 07:05:01 -- common/autotest_common.sh@940 -- # kill -0 1445139 00:24:39.929 07:05:01 -- common/autotest_common.sh@941 -- # uname 00:24:39.929 07:05:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:39.929 07:05:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1445139 00:24:39.929 07:05:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:39.930 07:05:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:39.930 07:05:01 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 1445139' 00:24:39.930 killing process with pid 1445139 00:24:39.930 07:05:01 -- common/autotest_common.sh@955 -- # kill 1445139 00:24:39.930 07:05:01 -- common/autotest_common.sh@960 -- # wait 1445139 00:24:40.500 07:05:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:40.500 07:05:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:40.500 00:24:40.500 real 0m33.042s 00:24:40.500 user 1m36.266s 00:24:40.500 sys 0m6.396s 00:24:40.500 07:05:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:40.500 07:05:01 -- common/autotest_common.sh@10 -- # set +x 00:24:40.500 ************************************ 00:24:40.500 END TEST dma 00:24:40.500 ************************************ 00:24:40.500 07:05:01 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:40.500 07:05:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:40.500 07:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:40.500 07:05:01 -- common/autotest_common.sh@10 -- # set +x 00:24:40.500 ************************************ 00:24:40.500 START TEST nvmf_identify 00:24:40.500 ************************************ 00:24:40.500 07:05:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:24:40.500 * Looking for test storage... 00:24:40.500 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:40.500 07:05:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:40.500 07:05:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:40.500 07:05:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:40.500 07:05:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:40.500 07:05:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:40.500 07:05:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:40.500 07:05:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:40.500 07:05:02 -- scripts/common.sh@335 -- # IFS=.-: 00:24:40.500 07:05:02 -- scripts/common.sh@335 -- # read -ra ver1 00:24:40.500 07:05:02 -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.500 07:05:02 -- scripts/common.sh@336 -- # read -ra ver2 00:24:40.500 07:05:02 -- scripts/common.sh@337 -- # local 'op=<' 00:24:40.500 07:05:02 -- scripts/common.sh@339 -- # ver1_l=2 00:24:40.500 07:05:02 -- scripts/common.sh@340 -- # ver2_l=1 00:24:40.500 07:05:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:40.500 07:05:02 -- scripts/common.sh@343 -- # case "$op" in 00:24:40.500 07:05:02 -- scripts/common.sh@344 -- # : 1 00:24:40.500 07:05:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:40.500 07:05:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.500 07:05:02 -- scripts/common.sh@364 -- # decimal 1 00:24:40.500 07:05:02 -- scripts/common.sh@352 -- # local d=1 00:24:40.500 07:05:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.500 07:05:02 -- scripts/common.sh@354 -- # echo 1 00:24:40.500 07:05:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:40.500 07:05:02 -- scripts/common.sh@365 -- # decimal 2 00:24:40.500 07:05:02 -- scripts/common.sh@352 -- # local d=2 00:24:40.500 07:05:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.500 07:05:02 -- scripts/common.sh@354 -- # echo 2 00:24:40.500 07:05:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:40.500 07:05:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:40.500 07:05:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:40.500 07:05:02 -- scripts/common.sh@367 -- # return 0 00:24:40.500 07:05:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.500 07:05:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.500 --rc genhtml_branch_coverage=1 00:24:40.500 --rc genhtml_function_coverage=1 00:24:40.500 --rc genhtml_legend=1 00:24:40.500 --rc geninfo_all_blocks=1 00:24:40.500 --rc geninfo_unexecuted_blocks=1 00:24:40.500 00:24:40.500 ' 00:24:40.500 07:05:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.500 --rc genhtml_branch_coverage=1 00:24:40.500 --rc genhtml_function_coverage=1 00:24:40.500 --rc genhtml_legend=1 00:24:40.500 --rc geninfo_all_blocks=1 00:24:40.500 --rc geninfo_unexecuted_blocks=1 00:24:40.500 00:24:40.500 ' 00:24:40.500 07:05:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.500 --rc genhtml_branch_coverage=1 00:24:40.500 --rc genhtml_function_coverage=1 00:24:40.500 --rc genhtml_legend=1 00:24:40.500 --rc geninfo_all_blocks=1 00:24:40.500 --rc geninfo_unexecuted_blocks=1 00:24:40.500 00:24:40.500 ' 00:24:40.500 07:05:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.500 --rc genhtml_branch_coverage=1 00:24:40.500 --rc genhtml_function_coverage=1 00:24:40.500 --rc genhtml_legend=1 00:24:40.500 --rc geninfo_all_blocks=1 00:24:40.500 --rc geninfo_unexecuted_blocks=1 00:24:40.500 00:24:40.500 ' 00:24:40.500 07:05:02 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.500 07:05:02 -- nvmf/common.sh@7 -- # uname -s 00:24:40.500 07:05:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.500 07:05:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.500 07:05:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.500 07:05:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.500 07:05:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.500 07:05:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.500 07:05:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.500 07:05:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.500 07:05:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.500 07:05:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.500 07:05:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:40.500 07:05:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:40.500 07:05:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.500 07:05:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.500 07:05:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.500 07:05:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:40.500 07:05:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.500 07:05:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.500 07:05:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.500 07:05:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.500 07:05:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.500 07:05:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.500 07:05:02 -- paths/export.sh@5 -- # export PATH 00:24:40.500 07:05:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.500 07:05:02 -- nvmf/common.sh@46 -- # : 0 00:24:40.500 07:05:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:40.500 07:05:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:40.500 07:05:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:40.500 07:05:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.500 07:05:02 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.500 07:05:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:40.500 07:05:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:40.500 07:05:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:40.500 07:05:02 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.500 07:05:02 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.500 07:05:02 -- host/identify.sh@14 -- # nvmftestinit 00:24:40.500 07:05:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:40.500 07:05:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.500 07:05:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:40.500 07:05:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:40.500 07:05:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:40.500 07:05:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.501 07:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.501 07:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.501 07:05:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:40.501 07:05:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:40.501 07:05:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:40.501 07:05:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.076 07:05:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:47.076 07:05:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:47.076 07:05:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:47.076 07:05:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:47.076 07:05:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:47.076 07:05:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:47.076 07:05:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:47.076 07:05:08 -- nvmf/common.sh@294 -- # net_devs=() 00:24:47.076 07:05:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:47.076 07:05:08 -- nvmf/common.sh@295 -- # e810=() 00:24:47.076 07:05:08 -- nvmf/common.sh@295 -- # local -ga e810 00:24:47.076 07:05:08 -- nvmf/common.sh@296 -- # x722=() 00:24:47.076 07:05:08 -- nvmf/common.sh@296 -- # local -ga x722 00:24:47.076 07:05:08 -- nvmf/common.sh@297 -- # mlx=() 00:24:47.076 07:05:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:47.076 07:05:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.076 07:05:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:47.076 07:05:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:47.076 
07:05:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:47.076 07:05:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:47.076 07:05:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:47.076 07:05:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:47.076 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:47.076 07:05:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:47.076 07:05:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:47.076 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:47.076 07:05:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:47.076 07:05:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:47.076 07:05:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.076 07:05:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:47.076 07:05:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.076 07:05:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:47.076 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:47.076 07:05:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.076 07:05:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.076 07:05:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:47.076 07:05:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.076 07:05:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:47.076 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:47.076 07:05:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.076 07:05:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:47.076 07:05:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:47.076 07:05:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:47.076 07:05:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:47.076 07:05:08 -- nvmf/common.sh@57 -- # uname 00:24:47.076 07:05:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:47.076 07:05:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:47.076 
07:05:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:47.076 07:05:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:47.076 07:05:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:47.076 07:05:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:47.076 07:05:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:47.076 07:05:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:47.076 07:05:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:47.076 07:05:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:47.076 07:05:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:47.076 07:05:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:47.076 07:05:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:47.076 07:05:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:47.076 07:05:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:47.076 07:05:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:47.076 07:05:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:47.076 07:05:08 -- nvmf/common.sh@104 -- # continue 2 00:24:47.076 07:05:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.076 07:05:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:47.076 07:05:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:47.076 07:05:08 -- nvmf/common.sh@104 -- # continue 2 00:24:47.076 07:05:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:47.076 07:05:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:47.076 07:05:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:47.076 07:05:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:47.077 07:05:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:47.077 07:05:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:47.077 07:05:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:47.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:47.077 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:47.077 altname enp217s0f0np0 00:24:47.077 altname ens818f0np0 00:24:47.077 inet 192.168.100.8/24 scope global mlx_0_0 00:24:47.077 valid_lft forever preferred_lft forever 00:24:47.077 07:05:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:47.077 07:05:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:47.077 07:05:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:47.077 07:05:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:47.077 07:05:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:47.077 07:05:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:47.077 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:24:47.077 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:47.077 altname enp217s0f1np1 00:24:47.077 altname ens818f1np1 00:24:47.077 inet 192.168.100.9/24 scope global mlx_0_1 00:24:47.077 valid_lft forever preferred_lft forever 00:24:47.077 07:05:08 -- nvmf/common.sh@410 -- # return 0 00:24:47.077 07:05:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:47.077 07:05:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:47.077 07:05:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:47.077 07:05:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:47.077 07:05:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:47.077 07:05:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:47.077 07:05:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:47.077 07:05:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:47.077 07:05:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:47.077 07:05:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:47.077 07:05:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:47.077 07:05:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.077 07:05:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:47.077 07:05:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:47.077 07:05:08 -- nvmf/common.sh@104 -- # continue 2 00:24:47.077 07:05:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:47.077 07:05:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.077 07:05:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:47.077 07:05:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.077 07:05:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:47.077 07:05:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:47.077 07:05:08 -- nvmf/common.sh@104 -- # continue 2 00:24:47.077 07:05:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:47.077 07:05:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:47.077 07:05:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:47.077 07:05:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:47.077 07:05:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:47.077 07:05:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:47.077 07:05:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:47.077 07:05:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:47.077 192.168.100.9' 00:24:47.077 07:05:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:47.077 192.168.100.9' 00:24:47.077 07:05:08 -- nvmf/common.sh@445 -- # head -n 1 00:24:47.077 07:05:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:47.077 07:05:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:47.077 192.168.100.9' 00:24:47.077 07:05:08 -- nvmf/common.sh@446 -- # tail -n +2 00:24:47.077 07:05:08 -- nvmf/common.sh@446 -- # head -n 1 00:24:47.077 07:05:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:47.077 07:05:08 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:24:47.077 07:05:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:47.077 07:05:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:47.077 07:05:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:47.077 07:05:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:47.077 07:05:08 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:47.077 07:05:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:47.077 07:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:47.077 07:05:08 -- host/identify.sh@19 -- # nvmfpid=1452673 00:24:47.077 07:05:08 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:47.077 07:05:08 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.077 07:05:08 -- host/identify.sh@23 -- # waitforlisten 1452673 00:24:47.077 07:05:08 -- common/autotest_common.sh@829 -- # '[' -z 1452673 ']' 00:24:47.077 07:05:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.077 07:05:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.077 07:05:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.077 07:05:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.077 07:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:47.077 [2024-12-15 07:05:08.543856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:47.077 [2024-12-15 07:05:08.543908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.077 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.077 [2024-12-15 07:05:08.614065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.077 [2024-12-15 07:05:08.652800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:47.077 [2024-12-15 07:05:08.652908] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.077 [2024-12-15 07:05:08.652918] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.077 [2024-12-15 07:05:08.652927] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
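Up to this point nvmftestinit has identified the two Mellanox ports (0x15b3 - 0x1015) as mlx_0_0/mlx_0_1, loaded the IB/RDMA kernel modules, confirmed 192.168.100.8 and 192.168.100.9 on them, extended NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024', and launched nvmf_tgt, which waitforlisten now polls for on /var/tmp/spdk.sock. A minimal bash sketch of that probe-and-wait pattern follows; the $SPDK_DIR variable, the 0.2 s poll interval, and the ~30 s timeout are illustrative assumptions, while the ip/awk/cut pipeline, the module list, the socket path, and the nvmf_tgt flags are taken from the trace itself.

#!/usr/bin/env bash
# Sketch only: reproduces the probe-and-wait steps traced above.
set -euo pipefail

get_ip_address() {
    # Same pipeline the trace runs for mlx_0_0 and mlx_0_1.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# RDMA stack, in the order load_ib_rdma_modules loads it (plus nvme-rdma).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
    modprobe "$mod"
done

ip_0=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
ip_1=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
echo "first target: $ip_0, second target: $ip_1"

# Launch the target and wait for its RPC socket -- a simplified stand-in for
# waitforlisten, which additionally verifies that the RPC server responds.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
for _ in $(seq 1 150); do                  # ~30 s at 0.2 s per try (assumed)
    if [ -S /var/tmp/spdk.sock ]; then break; fi
    sleep 0.2
done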
00:24:47.077 [2024-12-15 07:05:08.653060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.077 [2024-12-15 07:05:08.653079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.077 [2024-12-15 07:05:08.653167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.077 [2024-12-15 07:05:08.653169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.015 07:05:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.015 07:05:09 -- common/autotest_common.sh@862 -- # return 0 00:24:48.015 07:05:09 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:48.015 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.015 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.015 [2024-12-15 07:05:09.393336] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9620d0/0x9665a0) succeed. 00:24:48.015 [2024-12-15 07:05:09.402652] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x963670/0x9a7c40) succeed. 00:24:48.015 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.015 07:05:09 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:48.015 07:05:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:48.015 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.015 07:05:09 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:48.015 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.015 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.015 Malloc0 00:24:48.015 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.015 07:05:09 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:48.015 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.015 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.015 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.015 07:05:09 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:48.015 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.015 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.015 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.016 07:05:09 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:48.016 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.016 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.016 [2024-12-15 07:05:09.613075] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:48.016 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.016 07:05:09 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:48.016 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.016 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.016 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.016 07:05:09 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:48.016 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.016 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.016 [2024-12-15 
07:05:09.628712] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:48.016 [ 00:24:48.016 { 00:24:48.016 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:48.016 "subtype": "Discovery", 00:24:48.016 "listen_addresses": [ 00:24:48.016 { 00:24:48.016 "transport": "RDMA", 00:24:48.016 "trtype": "RDMA", 00:24:48.016 "adrfam": "IPv4", 00:24:48.016 "traddr": "192.168.100.8", 00:24:48.016 "trsvcid": "4420" 00:24:48.016 } 00:24:48.016 ], 00:24:48.016 "allow_any_host": true, 00:24:48.016 "hosts": [] 00:24:48.016 }, 00:24:48.016 { 00:24:48.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.016 "subtype": "NVMe", 00:24:48.016 "listen_addresses": [ 00:24:48.016 { 00:24:48.016 "transport": "RDMA", 00:24:48.016 "trtype": "RDMA", 00:24:48.016 "adrfam": "IPv4", 00:24:48.016 "traddr": "192.168.100.8", 00:24:48.016 "trsvcid": "4420" 00:24:48.016 } 00:24:48.016 ], 00:24:48.016 "allow_any_host": true, 00:24:48.016 "hosts": [], 00:24:48.016 "serial_number": "SPDK00000000000001", 00:24:48.016 "model_number": "SPDK bdev Controller", 00:24:48.016 "max_namespaces": 32, 00:24:48.016 "min_cntlid": 1, 00:24:48.016 "max_cntlid": 65519, 00:24:48.016 "namespaces": [ 00:24:48.016 { 00:24:48.016 "nsid": 1, 00:24:48.016 "bdev_name": "Malloc0", 00:24:48.016 "name": "Malloc0", 00:24:48.016 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:48.016 "eui64": "ABCDEF0123456789", 00:24:48.016 "uuid": "0b9c8ab3-950b-47ed-8552-b8c3281c271d" 00:24:48.016 } 00:24:48.016 ] 00:24:48.016 } 00:24:48.016 ] 00:24:48.016 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.016 07:05:09 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:48.284 [2024-12-15 07:05:09.668879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:48.284 [2024-12-15 07:05:09.668917] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452952 ] 00:24:48.284 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.284 [2024-12-15 07:05:09.716201] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:48.284 [2024-12-15 07:05:09.716271] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:48.284 [2024-12-15 07:05:09.716299] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:48.284 [2024-12-15 07:05:09.716304] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:48.284 [2024-12-15 07:05:09.716337] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:48.284 [2024-12-15 07:05:09.727519] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
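The rpc_cmd calls traced above are ordinary JSON-RPCs against the target's /var/tmp/spdk.sock. Replayed by hand they would look like the sketch below; every flag is copied from the trace, and only the use of scripts/rpc.py as the client (an assumption about how rpc_cmd is implemented in this harness) plus the $SPDK_DIR variable are supplied here.

# Hand-run equivalents of the host/identify.sh provisioning sequence.
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_get_subsystems    # returns the JSON array shown above

With both listeners up, spdk_nvme_identify dials the discovery NQN at 192.168.100.8:4420 over RDMA; the queue-depth negotiation that opens its connect sequence is what the records that follow trace.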
00:24:48.284 [2024-12-15 07:05:09.737634] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:48.284 [2024-12-15 07:05:09.737646] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:48.284 [2024-12-15 07:05:09.737654] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737661] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737667] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737674] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737680] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737686] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737692] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737698] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737704] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737713] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737719] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737725] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737731] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737737] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737743] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737749] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737755] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737762] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737768] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737774] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737780] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737786] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737792] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 
07:05:09.737798] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737804] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737810] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737816] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737822] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737828] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737834] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737840] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737846] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:48.284 [2024-12-15 07:05:09.737851] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:48.284 [2024-12-15 07:05:09.737856] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:48.284 [2024-12-15 07:05:09.737875] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.737889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:48.284 [2024-12-15 07:05:09.742983] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.284 [2024-12-15 07:05:09.742994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:48.284 [2024-12-15 07:05:09.743002] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743009] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:48.284 [2024-12-15 07:05:09.743016] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:48.284 [2024-12-15 07:05:09.743025] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:48.284 [2024-12-15 07:05:09.743040] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.284 [2024-12-15 07:05:09.743067] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.284 [2024-12-15 07:05:09.743073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:48.284 [2024-12-15 07:05:09.743080] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:48.284 [2024-12-15 07:05:09.743086] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743092] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:48.284 [2024-12-15 07:05:09.743100] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.284 [2024-12-15 07:05:09.743130] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.284 [2024-12-15 07:05:09.743136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:48.284 [2024-12-15 07:05:09.743143] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:48.284 [2024-12-15 07:05:09.743148] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743156] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:48.284 [2024-12-15 07:05:09.743163] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.284 [2024-12-15 07:05:09.743186] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.284 [2024-12-15 07:05:09.743192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:48.284 [2024-12-15 07:05:09.743198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:48.284 [2024-12-15 07:05:09.743204] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743213] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.284 [2024-12-15 07:05:09.743220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.284 [2024-12-15 07:05:09.743241] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743253] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:48.285 [2024-12-15 07:05:09.743259] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:48.285 [2024-12-15 07:05:09.743265] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743274] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:48.285 [2024-12-15 07:05:09.743380] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:48.285 [2024-12-15 07:05:09.743386] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:48.285 [2024-12-15 07:05:09.743396] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.285 [2024-12-15 07:05:09.743421] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743433] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:48.285 [2024-12-15 07:05:09.743438] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743447] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.285 [2024-12-15 07:05:09.743471] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743483] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:48.285 [2024-12-15 07:05:09.743489] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:48.285 [2024-12-15 07:05:09.743495] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743502] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:48.285 [2024-12-15 07:05:09.743510] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:48.285 [2024-12-15 07:05:09.743519] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:48.285 [2024-12-15 07:05:09.743561] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743567] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743576] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:48.285 [2024-12-15 07:05:09.743582] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:48.285 [2024-12-15 07:05:09.743588] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:48.285 [2024-12-15 07:05:09.743595] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:48.285 [2024-12-15 07:05:09.743602] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:48.285 [2024-12-15 07:05:09.743608] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:48.285 [2024-12-15 07:05:09.743613] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743623] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:48.285 [2024-12-15 07:05:09.743631] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.285 [2024-12-15 07:05:09.743658] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743673] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.285 [2024-12-15 07:05:09.743687] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.285 [2024-12-15 07:05:09.743700] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.285 [2024-12-15 07:05:09.743714] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.285 [2024-12-15 07:05:09.743726] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:24:48.285 [2024-12-15 07:05:09.743732] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743743] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:48.285 [2024-12-15 07:05:09.743750] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.285 [2024-12-15 07:05:09.743779] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743791] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:48.285 [2024-12-15 07:05:09.743797] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:48.285 [2024-12-15 07:05:09.743803] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743812] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:48.285 [2024-12-15 07:05:09.743844] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743857] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743867] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:48.285 [2024-12-15 07:05:09.743889] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183d00 00:24:48.285 [2024-12-15 07:05:09.743905] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.285 [2024-12-15 07:05:09.743927] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743944] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183d00 00:24:48.285 [2024-12-15 07:05:09.743957] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743963] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.743979] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.743986] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.743991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.744000] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.744008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183d00 00:24:48.285 [2024-12-15 07:05:09.744014] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:48.285 [2024-12-15 07:05:09.744036] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.285 [2024-12-15 07:05:09.744041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:48.285 [2024-12-15 07:05:09.744052] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:48.285 ===================================================== 00:24:48.285 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:48.285 ===================================================== 00:24:48.285 Controller Capabilities/Features 00:24:48.285 ================================ 00:24:48.285 Vendor ID: 0000 00:24:48.285 Subsystem Vendor ID: 0000 00:24:48.285 Serial Number: .................... 00:24:48.286 Model Number: ........................................ 
00:24:48.286 Firmware Version: 24.01.1 00:24:48.286 Recommended Arb Burst: 0 00:24:48.286 IEEE OUI Identifier: 00 00 00 00:24:48.286 Multi-path I/O 00:24:48.286 May have multiple subsystem ports: No 00:24:48.286 May have multiple controllers: No 00:24:48.286 Associated with SR-IOV VF: No 00:24:48.286 Max Data Transfer Size: 131072 00:24:48.286 Max Number of Namespaces: 0 00:24:48.286 Max Number of I/O Queues: 1024 00:24:48.286 NVMe Specification Version (VS): 1.3 00:24:48.286 NVMe Specification Version (Identify): 1.3 00:24:48.286 Maximum Queue Entries: 128 00:24:48.286 Contiguous Queues Required: Yes 00:24:48.286 Arbitration Mechanisms Supported 00:24:48.286 Weighted Round Robin: Not Supported 00:24:48.286 Vendor Specific: Not Supported 00:24:48.286 Reset Timeout: 15000 ms 00:24:48.286 Doorbell Stride: 4 bytes 00:24:48.286 NVM Subsystem Reset: Not Supported 00:24:48.286 Command Sets Supported 00:24:48.286 NVM Command Set: Supported 00:24:48.286 Boot Partition: Not Supported 00:24:48.286 Memory Page Size Minimum: 4096 bytes 00:24:48.286 Memory Page Size Maximum: 4096 bytes 00:24:48.286 Persistent Memory Region: Not Supported 00:24:48.286 Optional Asynchronous Events Supported 00:24:48.286 Namespace Attribute Notices: Not Supported 00:24:48.286 Firmware Activation Notices: Not Supported 00:24:48.286 ANA Change Notices: Not Supported 00:24:48.286 PLE Aggregate Log Change Notices: Not Supported 00:24:48.286 LBA Status Info Alert Notices: Not Supported 00:24:48.286 EGE Aggregate Log Change Notices: Not Supported 00:24:48.286 Normal NVM Subsystem Shutdown event: Not Supported 00:24:48.286 Zone Descriptor Change Notices: Not Supported 00:24:48.286 Discovery Log Change Notices: Supported 00:24:48.286 Controller Attributes 00:24:48.286 128-bit Host Identifier: Not Supported 00:24:48.286 Non-Operational Permissive Mode: Not Supported 00:24:48.286 NVM Sets: Not Supported 00:24:48.286 Read Recovery Levels: Not Supported 00:24:48.286 Endurance Groups: Not Supported 00:24:48.286 Predictable Latency Mode: Not Supported 00:24:48.286 Traffic Based Keep ALive: Not Supported 00:24:48.286 Namespace Granularity: Not Supported 00:24:48.286 SQ Associations: Not Supported 00:24:48.286 UUID List: Not Supported 00:24:48.286 Multi-Domain Subsystem: Not Supported 00:24:48.286 Fixed Capacity Management: Not Supported 00:24:48.286 Variable Capacity Management: Not Supported 00:24:48.286 Delete Endurance Group: Not Supported 00:24:48.286 Delete NVM Set: Not Supported 00:24:48.286 Extended LBA Formats Supported: Not Supported 00:24:48.286 Flexible Data Placement Supported: Not Supported 00:24:48.286 00:24:48.286 Controller Memory Buffer Support 00:24:48.286 ================================ 00:24:48.286 Supported: No 00:24:48.286 00:24:48.286 Persistent Memory Region Support 00:24:48.286 ================================ 00:24:48.286 Supported: No 00:24:48.286 00:24:48.286 Admin Command Set Attributes 00:24:48.286 ============================ 00:24:48.286 Security Send/Receive: Not Supported 00:24:48.286 Format NVM: Not Supported 00:24:48.286 Firmware Activate/Download: Not Supported 00:24:48.286 Namespace Management: Not Supported 00:24:48.286 Device Self-Test: Not Supported 00:24:48.286 Directives: Not Supported 00:24:48.286 NVMe-MI: Not Supported 00:24:48.286 Virtualization Management: Not Supported 00:24:48.286 Doorbell Buffer Config: Not Supported 00:24:48.286 Get LBA Status Capability: Not Supported 00:24:48.286 Command & Feature Lockdown Capability: Not Supported 00:24:48.286 Abort Command Limit: 1 00:24:48.286 
Async Event Request Limit: 4 00:24:48.286 Number of Firmware Slots: N/A 00:24:48.286 Firmware Slot 1 Read-Only: N/A 00:24:48.286 Firmware Activation Without Reset: N/A 00:24:48.286 Multiple Update Detection Support: N/A 00:24:48.286 Firmware Update Granularity: No Information Provided 00:24:48.286 Per-Namespace SMART Log: No 00:24:48.286 Asymmetric Namespace Access Log Page: Not Supported 00:24:48.286 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:48.286 Command Effects Log Page: Not Supported 00:24:48.286 Get Log Page Extended Data: Supported 00:24:48.286 Telemetry Log Pages: Not Supported 00:24:48.286 Persistent Event Log Pages: Not Supported 00:24:48.286 Supported Log Pages Log Page: May Support 00:24:48.286 Commands Supported & Effects Log Page: Not Supported 00:24:48.286 Feature Identifiers & Effects Log Page:May Support 00:24:48.286 NVMe-MI Commands & Effects Log Page: May Support 00:24:48.286 Data Area 4 for Telemetry Log: Not Supported 00:24:48.286 Error Log Page Entries Supported: 128 00:24:48.286 Keep Alive: Not Supported 00:24:48.286 00:24:48.286 NVM Command Set Attributes 00:24:48.286 ========================== 00:24:48.286 Submission Queue Entry Size 00:24:48.286 Max: 1 00:24:48.286 Min: 1 00:24:48.286 Completion Queue Entry Size 00:24:48.286 Max: 1 00:24:48.286 Min: 1 00:24:48.286 Number of Namespaces: 0 00:24:48.286 Compare Command: Not Supported 00:24:48.286 Write Uncorrectable Command: Not Supported 00:24:48.286 Dataset Management Command: Not Supported 00:24:48.286 Write Zeroes Command: Not Supported 00:24:48.286 Set Features Save Field: Not Supported 00:24:48.286 Reservations: Not Supported 00:24:48.286 Timestamp: Not Supported 00:24:48.286 Copy: Not Supported 00:24:48.286 Volatile Write Cache: Not Present 00:24:48.286 Atomic Write Unit (Normal): 1 00:24:48.286 Atomic Write Unit (PFail): 1 00:24:48.286 Atomic Compare & Write Unit: 1 00:24:48.286 Fused Compare & Write: Supported 00:24:48.286 Scatter-Gather List 00:24:48.286 SGL Command Set: Supported 00:24:48.286 SGL Keyed: Supported 00:24:48.286 SGL Bit Bucket Descriptor: Not Supported 00:24:48.286 SGL Metadata Pointer: Not Supported 00:24:48.286 Oversized SGL: Not Supported 00:24:48.286 SGL Metadata Address: Not Supported 00:24:48.286 SGL Offset: Supported 00:24:48.286 Transport SGL Data Block: Not Supported 00:24:48.286 Replay Protected Memory Block: Not Supported 00:24:48.286 00:24:48.286 Firmware Slot Information 00:24:48.286 ========================= 00:24:48.286 Active slot: 0 00:24:48.286 00:24:48.286 00:24:48.286 Error Log 00:24:48.286 ========= 00:24:48.286 00:24:48.286 Active Namespaces 00:24:48.286 ================= 00:24:48.286 Discovery Log Page 00:24:48.286 ================== 00:24:48.286 Generation Counter: 2 00:24:48.286 Number of Records: 2 00:24:48.286 Record Format: 0 00:24:48.286 00:24:48.286 Discovery Log Entry 0 00:24:48.286 ---------------------- 00:24:48.286 Transport Type: 1 (RDMA) 00:24:48.286 Address Family: 1 (IPv4) 00:24:48.286 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:48.286 Entry Flags: 00:24:48.286 Duplicate Returned Information: 1 00:24:48.286 Explicit Persistent Connection Support for Discovery: 1 00:24:48.286 Transport Requirements: 00:24:48.286 Secure Channel: Not Required 00:24:48.286 Port ID: 0 (0x0000) 00:24:48.286 Controller ID: 65535 (0xffff) 00:24:48.286 Admin Max SQ Size: 128 00:24:48.286 Transport Service Identifier: 4420 00:24:48.286 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:48.286 Transport Address: 192.168.100.8 
00:24:48.286 Transport Specific Address Subtype - RDMA 00:24:48.286 RDMA QP Service Type: 1 (Reliable Connected) 00:24:48.286 RDMA Provider Type: 1 (No provider specified) 00:24:48.286 RDMA CM Service: 1 (RDMA_CM) 00:24:48.286 Discovery Log Entry 1 00:24:48.286 ---------------------- 00:24:48.286 Transport Type: 1 (RDMA) 00:24:48.286 Address Family: 1 (IPv4) 00:24:48.286 Subsystem Type: 2 (NVM Subsystem) 00:24:48.286 Entry Flags: 00:24:48.286 Duplicate Returned Information: 0 00:24:48.286 Explicit Persistent Connection Support for Discovery: 0 00:24:48.286 Transport Requirements: 00:24:48.286 Secure Channel: Not Required 00:24:48.286 Port ID: 0 (0x0000) 00:24:48.286 Controller ID: 65535 (0xffff) 00:24:48.286 Admin Max SQ Size: [2024-12-15 07:05:09.744125] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:48.286 [2024-12-15 07:05:09.744135] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 64123 doesn't match qid 00:24:48.286 [2024-12-15 07:05:09.744148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32633 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:24:48.286 [2024-12-15 07:05:09.744156] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 64123 doesn't match qid 00:24:48.286 [2024-12-15 07:05:09.744165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32633 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:24:48.286 [2024-12-15 07:05:09.744171] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 64123 doesn't match qid 00:24:48.286 [2024-12-15 07:05:09.744179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32633 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:24:48.286 [2024-12-15 07:05:09.744185] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 64123 doesn't match qid 00:24:48.287 [2024-12-15 07:05:09.744193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32633 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744202] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744233] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744247] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744261] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744281] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744294] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:48.287 [2024-12-15 07:05:09.744300] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:48.287 [2024-12-15 07:05:09.744307] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744315] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744340] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744352] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744361] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744387] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744399] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744407] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744433] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744445] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744454] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744478] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744490] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744499] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744525] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744537] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744545] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744571] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744583] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744591] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744617] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744628] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744637] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744667] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744678] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744687] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744711] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:24:48.287 [2024-12-15 07:05:09.744723] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744732] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744757] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744769] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744778] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744805] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744817] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744825] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744847] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744858] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744867] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744891] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744902] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744911] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 
07:05:09.744934] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.744946] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744955] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.744964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.744989] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.744994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:48.287 [2024-12-15 07:05:09.745001] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.745009] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.287 [2024-12-15 07:05:09.745017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.287 [2024-12-15 07:05:09.745036] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.287 [2024-12-15 07:05:09.745042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745048] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745057] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745088] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745100] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745108] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745135] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745147] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745156] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745177] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745189] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745198] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745225] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745236] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745248] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745272] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745283] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745292] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745315] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745327] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745335] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745359] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:48.288 
[2024-12-15 07:05:09.745371] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745379] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745404] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745416] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745425] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745448] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745460] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745468] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745494] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745505] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745515] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745547] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745558] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745567] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745596] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745608] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745616] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745645] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745657] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745666] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745689] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745701] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745709] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.288 [2024-12-15 07:05:09.745735] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.288 [2024-12-15 07:05:09.745740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:48.288 [2024-12-15 07:05:09.745746] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:48.288 [2024-12-15 07:05:09.745755] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.745777] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.745782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.745790] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745798] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.745828] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.745833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.745839] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745848] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.745877] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.745883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.745889] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745897] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.745925] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.745930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.745936] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745945] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.745968] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.745974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.745984] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.745992] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746014] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 
07:05:09.746026] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746034] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746070] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746079] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746106] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746118] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746127] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746150] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746162] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746170] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746195] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746207] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746216] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746241] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746252] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746261] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746290] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746302] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746310] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746334] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746347] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746355] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746386] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746398] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746406] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746430] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746441] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746450] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746477] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746489] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746497] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746527] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.289 [2024-12-15 07:05:09.746532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:48.289 [2024-12-15 07:05:09.746539] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746547] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.289 [2024-12-15 07:05:09.746555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.289 [2024-12-15 07:05:09.746574] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746586] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746594] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746622] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746636] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746644] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746673] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 
07:05:09.746685] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746694] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746719] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746730] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746739] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746762] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746774] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746783] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746814] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746825] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746834] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746859] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746871] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746879] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746902] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746914] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746922] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.746950] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.746955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.746961] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.746970] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.750985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.290 [2024-12-15 07:05:09.750999] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.290 [2024-12-15 07:05:09.751005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000b p:0 m:0 dnr:0 00:24:48.290 [2024-12-15 07:05:09.751011] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.751018] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:48.290 128 00:24:48.290 Transport Service Identifier: 4420 00:24:48.290 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:48.290 Transport Address: 192.168.100.8 00:24:48.290 Transport Specific Address Subtype - RDMA 00:24:48.290 RDMA QP Service Type: 1 (Reliable Connected) 00:24:48.290 RDMA Provider Type: 1 (No provider specified) 00:24:48.290 RDMA CM Service: 1 (RDMA_CM) 00:24:48.290 07:05:09 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:48.290 [2024-12-15 07:05:09.818490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
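The test's next step, shown in the shell trace just above, runs spdk_nvme_identify with an explicit transport ID string against cnode1. As a hedged sketch of what that -r string maps onto in SPDK's public host API (standalone, error handling trimmed):

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0) {
                return 1;
        }

        /* The same -r argument the test passes to spdk_nvme_identify. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* Drives the sequence the debug trace below walks through:
         * FABRIC CONNECT, property reads of VS/CAP, controller enable,
         * then IDENTIFY. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }
        printf("connected to %s\n", trid.subnqn);
        spdk_nvme_detach(ctrlr);
        return 0;
}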
00:24:48.290 [2024-12-15 07:05:09.818527] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452961 ] 00:24:48.290 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.290 [2024-12-15 07:05:09.864107] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:48.290 [2024-12-15 07:05:09.864173] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:24:48.290 [2024-12-15 07:05:09.864191] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:24:48.290 [2024-12-15 07:05:09.864196] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:24:48.290 [2024-12-15 07:05:09.864223] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:48.290 [2024-12-15 07:05:09.882460] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:24:48.290 [2024-12-15 07:05:09.898064] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:48.290 [2024-12-15 07:05:09.898074] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:24:48.290 [2024-12-15 07:05:09.898081] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898088] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898094] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898100] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898106] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898112] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898118] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898124] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898130] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898136] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898142] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898148] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898154] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898160] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898166] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local 
addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898172] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898178] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898184] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898190] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898196] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898202] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898208] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898214] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898220] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898226] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898232] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898238] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:48.290 [2024-12-15 07:05:09.898244] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.898250] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.898256] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.898265] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.898271] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:24:48.291 [2024-12-15 07:05:09.898276] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:24:48.291 [2024-12-15 07:05:09.898280] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:24:48.291 [2024-12-15 07:05:09.898295] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.898307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183d00 00:24:48.291 [2024-12-15 07:05:09.904980] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.904992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.904999] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905007] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: 
*DEBUG*: CNTLID 0x0001 00:24:48.291 [2024-12-15 07:05:09.905013] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:48.291 [2024-12-15 07:05:09.905019] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:48.291 [2024-12-15 07:05:09.905032] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905065] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905077] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:48.291 [2024-12-15 07:05:09.905083] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905090] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:48.291 [2024-12-15 07:05:09.905097] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905121] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905133] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:48.291 [2024-12-15 07:05:09.905139] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905146] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:48.291 [2024-12-15 07:05:09.905153] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905179] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905193] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:48.291 [2024-12-15 07:05:09.905199] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905207] 
nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905235] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905246] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:48.291 [2024-12-15 07:05:09.905252] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:48.291 [2024-12-15 07:05:09.905258] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905265] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:48.291 [2024-12-15 07:05:09.905371] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:48.291 [2024-12-15 07:05:09.905376] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:48.291 [2024-12-15 07:05:09.905384] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905412] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905423] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:48.291 [2024-12-15 07:05:09.905429] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905437] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905465] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905476] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:48.291 [2024-12-15 07:05:09.905482] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:48.291 [2024-12-15 
07:05:09.905488] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905495] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:48.291 [2024-12-15 07:05:09.905503] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:48.291 [2024-12-15 07:05:09.905514] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:48.291 [2024-12-15 07:05:09.905570] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905584] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:48.291 [2024-12-15 07:05:09.905590] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:48.291 [2024-12-15 07:05:09.905596] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:48.291 [2024-12-15 07:05:09.905601] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:48.291 [2024-12-15 07:05:09.905607] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:48.291 [2024-12-15 07:05:09.905613] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:48.291 [2024-12-15 07:05:09.905618] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:48.291 [2024-12-15 07:05:09.905636] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905665] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.291 [2024-12-15 07:05:09.905670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:48.291 [2024-12-15 07:05:09.905679] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.291 [2024-12-15 07:05:09.905693] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905699] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.291 [2024-12-15 07:05:09.905706] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.291 [2024-12-15 07:05:09.905720] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.291 [2024-12-15 07:05:09.905732] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:48.291 [2024-12-15 07:05:09.905738] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905748] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:48.291 [2024-12-15 07:05:09.905755] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.291 [2024-12-15 07:05:09.905764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.291 [2024-12-15 07:05:09.905782] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.905788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.905794] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:48.292 [2024-12-15 07:05:09.905800] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.905806] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.905813] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.905823] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.905830] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.905837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.292 [2024-12-15 07:05:09.905859] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.905865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.905913] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.905919] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.905927] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.905935] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.905943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183d00 00:24:48.292 [2024-12-15 07:05:09.905967] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.905972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.905990] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:48.292 [2024-12-15 07:05:09.906003] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906010] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906018] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906026] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:48.292 [2024-12-15 07:05:09.906065] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.906072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.906086] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906092] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906100] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906108] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183d00 00:24:48.292 [2024-12-15 07:05:09.906144] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.906149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.906158] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906164] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906171] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906180] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906187] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906193] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906199] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:48.292 [2024-12-15 07:05:09.906205] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:48.292 [2024-12-15 07:05:09.906211] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:48.292 [2024-12-15 07:05:09.906224] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.292 [2024-12-15 07:05:09.906239] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.292 [2024-12-15 07:05:09.906257] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.906262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.906269] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906275] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.906280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.906286] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906295] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.292 [2024-12-15 07:05:09.906325] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:24:48.292 [2024-12-15 07:05:09.906330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.906337] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906345] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.292 [2024-12-15 07:05:09.906370] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.906375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.906381] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906390] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.292 [2024-12-15 07:05:09.906414] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.292 [2024-12-15 07:05:09.906420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:24:48.292 [2024-12-15 07:05:09.906426] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906436] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183d00 00:24:48.292 [2024-12-15 07:05:09.906452] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183d00 00:24:48.292 [2024-12-15 07:05:09.906467] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183d00 00:24:48.292 [2024-12-15 07:05:09.906483] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00 00:24:48.292 [2024-12-15 07:05:09.906490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183d00 00:24:48.292 [2024-12-15 07:05:09.906499] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: 
*DEBUG*: CQ recv completion
00:24:48.292 [2024-12-15 07:05:09.906504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:48.292 [2024-12-15 07:05:09.906515] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00
00:24:48.292 [2024-12-15 07:05:09.906522] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:24:48.292 [2024-12-15 07:05:09.906528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:48.292 [2024-12-15 07:05:09.906537] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00
00:24:48.292 [2024-12-15 07:05:09.906544] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:24:48.292 [2024-12-15 07:05:09.906549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:48.292 [2024-12-15 07:05:09.906556] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00
00:24:48.292 [2024-12-15 07:05:09.906562] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:24:48.292 [2024-12-15 07:05:09.906567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:48.292 [2024-12-15 07:05:09.906577] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183d00
00:24:48.293 =====================================================
00:24:48.293 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:48.293 =====================================================
00:24:48.293 Controller Capabilities/Features
00:24:48.293 ================================
00:24:48.293 Vendor ID: 8086
00:24:48.293 Subsystem Vendor ID: 8086
00:24:48.293 Serial Number: SPDK00000000000001
00:24:48.293 Model Number: SPDK bdev Controller
00:24:48.293 Firmware Version: 24.01.1
00:24:48.293 Recommended Arb Burst: 6
00:24:48.293 IEEE OUI Identifier: e4 d2 5c
00:24:48.293 Multi-path I/O
00:24:48.293 May have multiple subsystem ports: Yes
00:24:48.293 May have multiple controllers: Yes
00:24:48.293 Associated with SR-IOV VF: No
00:24:48.293 Max Data Transfer Size: 131072
00:24:48.293 Max Number of Namespaces: 32
00:24:48.293 Max Number of I/O Queues: 127
00:24:48.293 NVMe Specification Version (VS): 1.3
00:24:48.293 NVMe Specification Version (Identify): 1.3
00:24:48.293 Maximum Queue Entries: 128
00:24:48.293 Contiguous Queues Required: Yes
00:24:48.293 Arbitration Mechanisms Supported
00:24:48.293 Weighted Round Robin: Not Supported
00:24:48.293 Vendor Specific: Not Supported
00:24:48.293 Reset Timeout: 15000 ms
00:24:48.293 Doorbell Stride: 4 bytes
00:24:48.293 NVM Subsystem Reset: Not Supported
00:24:48.293 Command Sets Supported
00:24:48.293 NVM Command Set: Supported
00:24:48.293 Boot Partition: Not Supported
00:24:48.293 Memory Page Size Minimum: 4096 bytes
00:24:48.293 Memory Page Size Maximum: 4096 bytes
00:24:48.293 Persistent Memory Region: Not Supported
00:24:48.293 Optional Asynchronous Events Supported
00:24:48.293 Namespace Attribute Notices: Supported
00:24:48.293 Firmware Activation Notices: Not Supported
00:24:48.293 ANA Change Notices: Not Supported
00:24:48.293 PLE Aggregate Log Change Notices: Not Supported
00:24:48.293 LBA Status Info Alert Notices: Not Supported
00:24:48.293 EGE Aggregate Log Change Notices: Not Supported
00:24:48.293 Normal NVM Subsystem Shutdown event: Not Supported
00:24:48.293 Zone Descriptor Change Notices: Not Supported
00:24:48.293 Discovery Log Change Notices: Not Supported
00:24:48.293 Controller Attributes
00:24:48.293 128-bit Host Identifier: Supported
00:24:48.293 Non-Operational Permissive Mode: Not Supported
00:24:48.293 NVM Sets: Not Supported
00:24:48.293 Read Recovery Levels: Not Supported
00:24:48.293 Endurance Groups: Not Supported
00:24:48.293 Predictable Latency Mode: Not Supported
00:24:48.293 Traffic Based Keep ALive: Not Supported
00:24:48.293 Namespace Granularity: Not Supported
00:24:48.293 SQ Associations: Not Supported
00:24:48.293 UUID List: Not Supported
00:24:48.293 Multi-Domain Subsystem: Not Supported
00:24:48.293 Fixed Capacity Management: Not Supported
00:24:48.293 Variable Capacity Management: Not Supported
00:24:48.293 Delete Endurance Group: Not Supported
00:24:48.293 Delete NVM Set: Not Supported
00:24:48.293 Extended LBA Formats Supported: Not Supported
00:24:48.293 Flexible Data Placement Supported: Not Supported
00:24:48.293
00:24:48.293 Controller Memory Buffer Support
00:24:48.293 ================================
00:24:48.293 Supported: No
00:24:48.293
00:24:48.293 Persistent Memory Region Support
00:24:48.293 ================================
00:24:48.293 Supported: No
00:24:48.293
00:24:48.293 Admin Command Set Attributes
00:24:48.293 ============================
00:24:48.293 Security Send/Receive: Not Supported
00:24:48.293 Format NVM: Not Supported
00:24:48.293 Firmware Activate/Download: Not Supported
00:24:48.293 Namespace Management: Not Supported
00:24:48.293 Device Self-Test: Not Supported
00:24:48.293 Directives: Not Supported
00:24:48.293 NVMe-MI: Not Supported
00:24:48.293 Virtualization Management: Not Supported
00:24:48.293 Doorbell Buffer Config: Not Supported
00:24:48.293 Get LBA Status Capability: Not Supported
00:24:48.293 Command & Feature Lockdown Capability: Not Supported
00:24:48.293 Abort Command Limit: 4
00:24:48.293 Async Event Request Limit: 4
00:24:48.293 Number of Firmware Slots: N/A
00:24:48.293 Firmware Slot 1 Read-Only: N/A
00:24:48.293 Firmware Activation Without Reset: N/A
00:24:48.293 Multiple Update Detection Support: N/A
00:24:48.293 Firmware Update Granularity: No Information Provided
00:24:48.293 Per-Namespace SMART Log: No
00:24:48.293 Asymmetric Namespace Access Log Page: Not Supported
00:24:48.293 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:48.293 Command Effects Log Page: Supported
00:24:48.293 Get Log Page Extended Data: Supported
00:24:48.293 Telemetry Log Pages: Not Supported
00:24:48.293 Persistent Event Log Pages: Not Supported
00:24:48.293 Supported Log Pages Log Page: May Support
00:24:48.293 Commands Supported & Effects Log Page: Not Supported
00:24:48.293 Feature Identifiers & Effects Log Page:May Support
00:24:48.293 NVMe-MI Commands & Effects Log Page: May Support
00:24:48.293 Data Area 4 for Telemetry Log: Not Supported
00:24:48.293 Error Log Page Entries Supported: 128
00:24:48.293 Keep Alive: Supported
00:24:48.293 Keep Alive Granularity: 10000 ms
00:24:48.293
00:24:48.293 NVM Command Set Attributes
00:24:48.293 ==========================
00:24:48.293 Submission Queue Entry Size
00:24:48.293 Max: 64
00:24:48.293 Min: 64
00:24:48.293 Completion Queue Entry Size
00:24:48.293 Max: 16
00:24:48.293 Min: 16
00:24:48.293 Number of Namespaces: 32
00:24:48.293 Compare Command: Supported
00:24:48.293 Write Uncorrectable Command: Not Supported
00:24:48.293 Dataset Management Command: Supported
00:24:48.293 Write Zeroes Command: Supported
00:24:48.293 Set Features Save Field: Not Supported
00:24:48.293 Reservations: Supported
00:24:48.293 Timestamp: Not Supported
00:24:48.293 Copy: Supported
00:24:48.293 Volatile Write Cache: Present
00:24:48.293 Atomic Write Unit (Normal): 1
00:24:48.293 Atomic Write Unit (PFail): 1
00:24:48.293 Atomic Compare & Write Unit: 1
00:24:48.293 Fused Compare & Write: Supported
00:24:48.293 Scatter-Gather List
00:24:48.293 SGL Command Set: Supported
00:24:48.293 SGL Keyed: Supported
00:24:48.293 SGL Bit Bucket Descriptor: Not Supported
00:24:48.293 SGL Metadata Pointer: Not Supported
00:24:48.293 Oversized SGL: Not Supported
00:24:48.293 SGL Metadata Address: Not Supported
00:24:48.293 SGL Offset: Supported
00:24:48.293 Transport SGL Data Block: Not Supported
00:24:48.293 Replay Protected Memory Block: Not Supported
00:24:48.293
00:24:48.293 Firmware Slot Information
00:24:48.293 =========================
00:24:48.293 Active slot: 1
00:24:48.293 Slot 1 Firmware Revision: 24.01.1
00:24:48.293
00:24:48.293
00:24:48.293 Commands Supported and Effects
00:24:48.293 ==============================
00:24:48.293 Admin Commands
00:24:48.293 --------------
00:24:48.293 Get Log Page (02h): Supported
00:24:48.293 Identify (06h): Supported
00:24:48.293 Abort (08h): Supported
00:24:48.293 Set Features (09h): Supported
00:24:48.293 Get Features (0Ah): Supported
00:24:48.293 Asynchronous Event Request (0Ch): Supported
00:24:48.293 Keep Alive (18h): Supported
00:24:48.293 I/O Commands
00:24:48.293 ------------
00:24:48.293 Flush (00h): Supported LBA-Change
00:24:48.293 Write (01h): Supported LBA-Change
00:24:48.293 Read (02h): Supported
00:24:48.293 Compare (05h): Supported
00:24:48.293 Write Zeroes (08h): Supported LBA-Change
00:24:48.293 Dataset Management (09h): Supported LBA-Change
00:24:48.293 Copy (19h): Supported LBA-Change
00:24:48.293 Unknown (79h): Supported LBA-Change
00:24:48.293 Unknown (7Ah): Supported
00:24:48.293
00:24:48.293 Error Log
00:24:48.293 =========
00:24:48.293
00:24:48.293 Arbitration
00:24:48.293 ===========
00:24:48.293 Arbitration Burst: 1
00:24:48.294
00:24:48.294 Power Management
00:24:48.294 ================
00:24:48.294 Number of Power States: 1
00:24:48.294 Current Power State: Power State #0
00:24:48.294 Power State #0:
00:24:48.294 Max Power: 0.00 W
00:24:48.294 Non-Operational State: Operational
00:24:48.294 Entry Latency: Not Reported
00:24:48.294 Exit Latency: Not Reported
00:24:48.294 Relative Read Throughput: 0
00:24:48.294 Relative Read Latency: 0
00:24:48.294 Relative Write Throughput: 0
00:24:48.294 Relative Write Latency: 0
00:24:48.294 Idle Power: Not Reported
00:24:48.294 Active Power: Not Reported
00:24:48.294 Non-Operational Permissive Mode: Not Supported
00:24:48.294
00:24:48.294 Health Information
00:24:48.294 ==================
00:24:48.294 Critical Warnings:
00:24:48.294 Available Spare Space: OK
00:24:48.294 Temperature: OK
00:24:48.294 Device Reliability: OK
00:24:48.294 Read Only: No
00:24:48.294 Volatile Memory Backup: OK
00:24:48.294 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:48.294 Temperature Threshol[2024-12-15 07:05:09.906660] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183d00
00:24:48.294 [2024-12-15 07:05:09.906668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
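[Editor's note on the records that follow: they show the controller teardown. Over NVMe-oF there is no memory-mapped register BAR, so the host reaches the same CC and CSTS registers through Fabrics Property Set/Get commands, which is why every step in the shutdown appears as a FABRIC PROPERTY SET or FABRIC PROPERTY GET admin record. The handshake itself is the standard NVMe shutdown: write CC.SHN = 01b to request a normal shutdown, then poll CSTS until CSTS.SHST reads 10b. Below is a minimal, self-contained C sketch of that sequence; the prop_get()/prop_set() helpers are made up and simulated in-process here, standing in for the property commands rather than for SPDK's real transport API. Register offsets and bit positions are from the NVMe specification.]

    #include <stdint.h>
    #include <stdio.h>

    /* Register offsets and fields per the NVMe spec; over fabrics these
     * are reached via Property Set/Get instead of MMIO. */
    #define NVME_REG_CC          0x14
    #define NVME_REG_CSTS        0x1c
    #define NVME_CC_SHN_MASK     (3u << 14)
    #define NVME_CC_SHN_NORMAL   (1u << 14)   /* SHN = 01b: normal shutdown */
    #define NVME_CSTS_SHST_MASK  (3u << 2)
    #define NVME_CSTS_SHST_DONE  (2u << 2)    /* SHST = 10b: shutdown done  */

    /* Hypothetical stand-ins for FABRIC PROPERTY GET/SET, backed by a
     * tiny fake controller so the sketch compiles and runs on its own. */
    static uint32_t fake_cc;

    static uint32_t prop_get(uint32_t off)
    {
        if (off == NVME_REG_CC)
            return fake_cc;
        /* Fake CSTS: RDY only until shutdown is requested, then
         * RDY plus SHST = shutdown complete. */
        return (fake_cc & NVME_CC_SHN_MASK) ? (0x1 | NVME_CSTS_SHST_DONE) : 0x1;
    }

    static void prop_set(uint32_t off, uint32_t val)
    {
        if (off == NVME_REG_CC)
            fake_cc = val;
    }

    int main(void)
    {
        uint32_t cc = prop_get(NVME_REG_CC);

        /* Request a normal shutdown: the FABRIC PROPERTY SET below. */
        cc = (cc & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
        prop_set(NVME_REG_CC, cc);

        /* Poll CSTS.SHST: the run of FABRIC PROPERTY GETs below. */
        while ((prop_get(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK)
               != NVME_CSTS_SHST_DONE)
            ;   /* real code would bound this with a timeout */

        printf("shutdown complete\n");
        return 0;
    }

[SPDK drives the equivalent sequence from its controller state machine in nvme_ctrlr.c, bounding the poll with the "shutdown timeout = 10000 ms" visible in the records below.]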
00:24:48.294 [2024-12-15 07:05:09.906683] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.906689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906695] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906720] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:48.294 [2024-12-15 07:05:09.906730] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50153 doesn't match qid 00:24:48.294 [2024-12-15 07:05:09.906743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906749] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50153 doesn't match qid 00:24:48.294 [2024-12-15 07:05:09.906757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906763] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50153 doesn't match qid 00:24:48.294 [2024-12-15 07:05:09.906771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906778] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50153 doesn't match qid 00:24:48.294 [2024-12-15 07:05:09.906785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:5 sqhd:ce28 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906794] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.906818] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.906824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906832] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.906846] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906860] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.906868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906874] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:48.294 [2024-12-15 07:05:09.906880] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:48.294 [2024-12-15 07:05:09.906886] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906895] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.906926] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.906932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906938] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906947] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.906954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.906982] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.906988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.906994] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907003] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907031] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907042] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907051] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907079] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907098] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907107] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907131] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
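[Editor's note on this polling stretch: each iteration pairs a FABRIC PROPERTY GET against CSTS with a completion whose cdw0 carries the raw register value. cdw0:1 is RDY = 1 with SHST still 00b, i.e. the shutdown has not finished; the loop only exits when a poll finally returns cdw0:9, which is RDY (bit 0) plus SHST = 10b (value 2 in bits 3:2, contributing 0x8), after which the driver prints "shutdown complete in 6 milliseconds" near the end of this stream. A small decoder for those cdw0 values, as a sketch; the bit layout is from the NVMe spec and decode_csts() is a made-up helper:]

    #include <stdint.h>
    #include <stdio.h>

    /* CSTS bit layout per the NVMe spec: RDY bit 0, CFS bit 1,
     * SHST bits 3:2. */
    static void decode_csts(uint32_t cdw0)
    {
        static const char *shst[] = { "normal operation", "shutdown occurring",
                                      "shutdown complete", "reserved" };
        printf("cdw0:%x -> RDY=%u CFS=%u SHST=%s\n", cdw0,
               cdw0 & 1, (cdw0 >> 1) & 1, shst[(cdw0 >> 2) & 3]);
    }

    int main(void)
    {
        decode_csts(0x1); /* the in-progress polls in this log  */
        decode_csts(0x9); /* the final poll before completion   */
        return 0;
    }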
00:24:48.294 [2024-12-15 07:05:09.907137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907143] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907152] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907177] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907189] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907198] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907221] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907233] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907242] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907273] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907285] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907293] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907321] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907332] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907341] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907348] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907368] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907379] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907388] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.294 [2024-12-15 07:05:09.907395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.294 [2024-12-15 07:05:09.907411] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.294 [2024-12-15 07:05:09.907417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:48.294 [2024-12-15 07:05:09.907423] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907433] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907456] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907468] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907476] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907503] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907515] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907523] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907554] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907566] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907574] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907600] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907611] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907620] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907645] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907656] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907665] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907688] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907700] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907710] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907733] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907745] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907753] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907778] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 
[2024-12-15 07:05:09.907784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907790] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907798] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907829] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907841] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907849] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907875] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907886] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907895] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907920] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907931] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907940] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.907965] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.907971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.907981] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907990] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.907998] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.908015] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.908021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.908027] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908035] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.908057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.908062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.908068] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908077] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.908102] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.908107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.908113] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908122] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.908145] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.908151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.908157] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908165] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.908195] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.908200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.908206] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cfa30 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908215] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.295 [2024-12-15 07:05:09.908236] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.295 [2024-12-15 07:05:09.908241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:24:48.295 [2024-12-15 07:05:09.908249] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908258] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.295 [2024-12-15 07:05:09.908265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908287] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908298] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908307] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908334] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908345] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908354] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908379] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908391] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908399] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908423] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 
[2024-12-15 07:05:09.908428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908434] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908443] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908466] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908477] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908486] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908507] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908520] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908529] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908558] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908570] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908578] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908602] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908613] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908622] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908630] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908645] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908657] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908665] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908696] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908708] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908716] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908742] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908753] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908762] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908791] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908804] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908812] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908838] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908849] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf788 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908858] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908881] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908893] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908901] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908925] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.908930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.908936] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908945] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.908952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.908970] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.912982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.912990] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.912998] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.913006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:24:48.296 [2024-12-15 07:05:09.913026] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:24:48.296 [2024-12-15 07:05:09.913032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:24:48.296 [2024-12-15 07:05:09.913038] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183d00 00:24:48.296 [2024-12-15 07:05:09.913045] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:48.556 d: 0 Kelvin (-273 Celsius) 00:24:48.556 Available Spare: 0% 00:24:48.556 Available Spare Threshold: 0% 00:24:48.556 Life Percentage Used: 0% 00:24:48.556 Data Units Read: 0 00:24:48.556 Data Units Written: 0 00:24:48.556 Host Read Commands: 0 00:24:48.556 Host Write Commands: 0 00:24:48.556 
Controller Busy Time: 0 minutes 00:24:48.556 Power Cycles: 0 00:24:48.556 Power On Hours: 0 hours 00:24:48.556 Unsafe Shutdowns: 0 00:24:48.556 Unrecoverable Media Errors: 0 00:24:48.556 Lifetime Error Log Entries: 0 00:24:48.556 Warning Temperature Time: 0 minutes 00:24:48.556 Critical Temperature Time: 0 minutes 00:24:48.556 00:24:48.556 Number of Queues 00:24:48.556 ================ 00:24:48.556 Number of I/O Submission Queues: 127 00:24:48.556 Number of I/O Completion Queues: 127 00:24:48.556 00:24:48.556 Active Namespaces 00:24:48.556 ================= 00:24:48.556 Namespace ID:1 00:24:48.556 Error Recovery Timeout: Unlimited 00:24:48.556 Command Set Identifier: NVM (00h) 00:24:48.556 Deallocate: Supported 00:24:48.556 Deallocated/Unwritten Error: Not Supported 00:24:48.556 Deallocated Read Value: Unknown 00:24:48.556 Deallocate in Write Zeroes: Not Supported 00:24:48.556 Deallocated Guard Field: 0xFFFF 00:24:48.556 Flush: Supported 00:24:48.556 Reservation: Supported 00:24:48.556 Namespace Sharing Capabilities: Multiple Controllers 00:24:48.556 Size (in LBAs): 131072 (0GiB) 00:24:48.556 Capacity (in LBAs): 131072 (0GiB) 00:24:48.556 Utilization (in LBAs): 131072 (0GiB) 00:24:48.556 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:48.556 EUI64: ABCDEF0123456789 00:24:48.556 UUID: 0b9c8ab3-950b-47ed-8552-b8c3281c271d 00:24:48.557 Thin Provisioning: Not Supported 00:24:48.557 Per-NS Atomic Units: Yes 00:24:48.557 Atomic Boundary Size (Normal): 0 00:24:48.557 Atomic Boundary Size (PFail): 0 00:24:48.557 Atomic Boundary Offset: 0 00:24:48.557 Maximum Single Source Range Length: 65535 00:24:48.557 Maximum Copy Length: 65535 00:24:48.557 Maximum Source Range Count: 1 00:24:48.557 NGUID/EUI64 Never Reused: No 00:24:48.557 Namespace Write Protected: No 00:24:48.557 Number of LBA Formats: 1 00:24:48.557 Current LBA Format: LBA Format #00 00:24:48.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:48.557 00:24:48.557 07:05:09 -- host/identify.sh@51 -- # sync 00:24:48.557 07:05:09 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.557 07:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.557 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:48.557 07:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.557 07:05:09 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:48.557 07:05:09 -- host/identify.sh@56 -- # nvmftestfini 00:24:48.557 07:05:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:48.557 07:05:09 -- nvmf/common.sh@116 -- # sync 00:24:48.557 07:05:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:48.557 07:05:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:48.557 07:05:09 -- nvmf/common.sh@119 -- # set +e 00:24:48.557 07:05:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:48.557 07:05:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:48.557 rmmod nvme_rdma 00:24:48.557 rmmod nvme_fabrics 00:24:48.557 07:05:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:48.557 07:05:10 -- nvmf/common.sh@123 -- # set -e 00:24:48.557 07:05:10 -- nvmf/common.sh@124 -- # return 0 00:24:48.557 07:05:10 -- nvmf/common.sh@477 -- # '[' -n 1452673 ']' 00:24:48.557 07:05:10 -- nvmf/common.sh@478 -- # killprocess 1452673 00:24:48.557 07:05:10 -- common/autotest_common.sh@936 -- # '[' -z 1452673 ']' 00:24:48.557 07:05:10 -- common/autotest_common.sh@940 -- # kill -0 1452673 00:24:48.557 07:05:10 -- common/autotest_common.sh@941 -- # uname 00:24:48.557 07:05:10 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.557 07:05:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1452673 00:24:48.557 07:05:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:48.557 07:05:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:48.557 07:05:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1452673' 00:24:48.557 killing process with pid 1452673 00:24:48.557 07:05:10 -- common/autotest_common.sh@955 -- # kill 1452673 00:24:48.557 [2024-12-15 07:05:10.095717] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:48.557 07:05:10 -- common/autotest_common.sh@960 -- # wait 1452673 00:24:48.816 07:05:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:48.816 07:05:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:48.816 00:24:48.816 real 0m8.482s 00:24:48.816 user 0m8.624s 00:24:48.816 sys 0m5.403s 00:24:48.816 07:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:48.816 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:48.816 ************************************ 00:24:48.816 END TEST nvmf_identify 00:24:48.816 ************************************ 00:24:48.816 07:05:10 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:48.816 07:05:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:48.816 07:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:48.816 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:48.816 ************************************ 00:24:48.816 START TEST nvmf_perf 00:24:48.816 ************************************ 00:24:48.816 07:05:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:24:49.075 * Looking for test storage... 00:24:49.075 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:49.075 07:05:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:49.075 07:05:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:49.075 07:05:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:49.075 07:05:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:49.075 07:05:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:49.075 07:05:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:49.075 07:05:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:49.075 07:05:10 -- scripts/common.sh@335 -- # IFS=.-: 00:24:49.075 07:05:10 -- scripts/common.sh@335 -- # read -ra ver1 00:24:49.075 07:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.075 07:05:10 -- scripts/common.sh@336 -- # read -ra ver2 00:24:49.075 07:05:10 -- scripts/common.sh@337 -- # local 'op=<' 00:24:49.075 07:05:10 -- scripts/common.sh@339 -- # ver1_l=2 00:24:49.075 07:05:10 -- scripts/common.sh@340 -- # ver2_l=1 00:24:49.075 07:05:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:49.075 07:05:10 -- scripts/common.sh@343 -- # case "$op" in 00:24:49.075 07:05:10 -- scripts/common.sh@344 -- # : 1 00:24:49.075 07:05:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:49.075 07:05:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.075 07:05:10 -- scripts/common.sh@364 -- # decimal 1 00:24:49.075 07:05:10 -- scripts/common.sh@352 -- # local d=1 00:24:49.075 07:05:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.075 07:05:10 -- scripts/common.sh@354 -- # echo 1 00:24:49.075 07:05:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:49.075 07:05:10 -- scripts/common.sh@365 -- # decimal 2 00:24:49.075 07:05:10 -- scripts/common.sh@352 -- # local d=2 00:24:49.075 07:05:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.075 07:05:10 -- scripts/common.sh@354 -- # echo 2 00:24:49.075 07:05:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:49.075 07:05:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:49.075 07:05:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:49.075 07:05:10 -- scripts/common.sh@367 -- # return 0 00:24:49.075 07:05:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.075 07:05:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:49.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.075 --rc genhtml_branch_coverage=1 00:24:49.075 --rc genhtml_function_coverage=1 00:24:49.075 --rc genhtml_legend=1 00:24:49.075 --rc geninfo_all_blocks=1 00:24:49.075 --rc geninfo_unexecuted_blocks=1 00:24:49.075 00:24:49.075 ' 00:24:49.075 07:05:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:49.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.075 --rc genhtml_branch_coverage=1 00:24:49.075 --rc genhtml_function_coverage=1 00:24:49.075 --rc genhtml_legend=1 00:24:49.075 --rc geninfo_all_blocks=1 00:24:49.075 --rc geninfo_unexecuted_blocks=1 00:24:49.075 00:24:49.075 ' 00:24:49.075 07:05:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:49.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.075 --rc genhtml_branch_coverage=1 00:24:49.075 --rc genhtml_function_coverage=1 00:24:49.075 --rc genhtml_legend=1 00:24:49.075 --rc geninfo_all_blocks=1 00:24:49.075 --rc geninfo_unexecuted_blocks=1 00:24:49.075 00:24:49.075 ' 00:24:49.075 07:05:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:49.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.075 --rc genhtml_branch_coverage=1 00:24:49.075 --rc genhtml_function_coverage=1 00:24:49.075 --rc genhtml_legend=1 00:24:49.075 --rc geninfo_all_blocks=1 00:24:49.075 --rc geninfo_unexecuted_blocks=1 00:24:49.075 00:24:49.075 ' 00:24:49.075 07:05:10 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.075 07:05:10 -- nvmf/common.sh@7 -- # uname -s 00:24:49.075 07:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.076 07:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.076 07:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.076 07:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.076 07:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.076 07:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.076 07:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.076 07:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.076 07:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.076 07:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.076 07:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:24:49.076 07:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:49.076 07:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.076 07:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.076 07:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.076 07:05:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:49.076 07:05:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.076 07:05:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.076 07:05:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.076 07:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.076 07:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.076 07:05:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.076 07:05:10 -- paths/export.sh@5 -- # export PATH 00:24:49.076 07:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.076 07:05:10 -- nvmf/common.sh@46 -- # : 0 00:24:49.076 07:05:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:49.076 07:05:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:49.076 07:05:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:49.076 07:05:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.076 07:05:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.076 07:05:10 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:49.076 07:05:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:49.076 07:05:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:49.076 07:05:10 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:49.076 07:05:10 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:49.076 07:05:10 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:49.076 07:05:10 -- host/perf.sh@17 -- # nvmftestinit 00:24:49.076 07:05:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:49.076 07:05:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.076 07:05:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:49.076 07:05:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:49.076 07:05:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:49.076 07:05:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.076 07:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.076 07:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.076 07:05:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:49.076 07:05:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:49.076 07:05:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:49.076 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.768 07:05:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:55.768 07:05:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:55.768 07:05:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:55.768 07:05:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:55.768 07:05:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:55.768 07:05:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:55.768 07:05:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:55.768 07:05:16 -- nvmf/common.sh@294 -- # net_devs=() 00:24:55.768 07:05:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:55.768 07:05:16 -- nvmf/common.sh@295 -- # e810=() 00:24:55.768 07:05:16 -- nvmf/common.sh@295 -- # local -ga e810 00:24:55.768 07:05:16 -- nvmf/common.sh@296 -- # x722=() 00:24:55.768 07:05:16 -- nvmf/common.sh@296 -- # local -ga x722 00:24:55.768 07:05:16 -- nvmf/common.sh@297 -- # mlx=() 00:24:55.768 07:05:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:55.768 07:05:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.768 07:05:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:55.768 07:05:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:55.768 07:05:16 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:55.768 07:05:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:55.768 07:05:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:55.768 07:05:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:55.768 07:05:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:55.768 07:05:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:55.768 07:05:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:55.768 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:55.768 07:05:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:55.768 07:05:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:55.768 07:05:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:55.768 07:05:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:55.768 07:05:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:55.768 07:05:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:55.768 07:05:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:55.769 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:55.769 07:05:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:55.769 07:05:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:55.769 07:05:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.769 07:05:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:55.769 07:05:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.769 07:05:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:55.769 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.769 07:05:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.769 07:05:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:55.769 07:05:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.769 07:05:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:55.769 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.769 07:05:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:55.769 07:05:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:55.769 07:05:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:55.769 07:05:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:55.769 07:05:16 -- nvmf/common.sh@57 -- # uname 00:24:55.769 07:05:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:55.769 07:05:16 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:55.769 07:05:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:55.769 07:05:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:55.769 07:05:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:55.769 07:05:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:55.769 07:05:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:55.769 07:05:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:55.769 07:05:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:55.769 07:05:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:55.769 07:05:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:55.769 07:05:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:55.769 07:05:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:55.769 07:05:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:55.769 07:05:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:55.769 07:05:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:55.769 07:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@104 -- # continue 2 00:24:55.769 07:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@104 -- # continue 2 00:24:55.769 07:05:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:55.769 07:05:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:55.769 07:05:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:55.769 07:05:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:55.769 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:55.769 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:55.769 altname enp217s0f0np0 00:24:55.769 altname ens818f0np0 00:24:55.769 inet 192.168.100.8/24 scope global mlx_0_0 00:24:55.769 valid_lft forever preferred_lft forever 00:24:55.769 07:05:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:55.769 07:05:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:55.769 07:05:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:55.769 07:05:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:55.769 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:55.769 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:55.769 altname enp217s0f1np1 00:24:55.769 altname ens818f1np1 00:24:55.769 inet 192.168.100.9/24 scope global mlx_0_1 00:24:55.769 valid_lft forever preferred_lft forever 00:24:55.769 07:05:16 -- nvmf/common.sh@410 -- # return 0 00:24:55.769 07:05:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:55.769 07:05:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:55.769 07:05:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:55.769 07:05:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:55.769 07:05:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:55.769 07:05:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:55.769 07:05:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:55.769 07:05:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:55.769 07:05:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:55.769 07:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@104 -- # continue 2 00:24:55.769 07:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:55.769 07:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:55.769 07:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@104 -- # continue 2 00:24:55.769 07:05:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:55.769 07:05:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:55.769 07:05:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:55.769 07:05:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:55.769 07:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:55.769 07:05:16 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:55.769 192.168.100.9' 00:24:55.769 07:05:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:55.769 192.168.100.9' 00:24:55.769 07:05:17 -- nvmf/common.sh@445 -- # head -n 1 00:24:55.769 07:05:17 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:55.769 07:05:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:55.769 192.168.100.9' 00:24:55.769 07:05:17 -- nvmf/common.sh@446 -- # tail -n +2 00:24:55.769 07:05:17 -- nvmf/common.sh@446 -- # head -n 1 00:24:55.769 07:05:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:55.769 07:05:17 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:55.769 07:05:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:55.769 07:05:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:55.769 07:05:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:55.769 07:05:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:55.769 07:05:17 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:55.769 07:05:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:55.769 07:05:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.769 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:24:55.769 07:05:17 -- nvmf/common.sh@469 -- # nvmfpid=1456398 00:24:55.769 07:05:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:55.769 07:05:17 -- nvmf/common.sh@470 -- # waitforlisten 1456398 00:24:55.769 07:05:17 -- common/autotest_common.sh@829 -- # '[' -z 1456398 ']' 00:24:55.769 07:05:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.769 07:05:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.769 07:05:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.769 07:05:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.769 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:24:55.769 [2024-12-15 07:05:17.082515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:55.769 [2024-12-15 07:05:17.082563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.769 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.769 [2024-12-15 07:05:17.150735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.769 [2024-12-15 07:05:17.190389] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:55.769 [2024-12-15 07:05:17.190517] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.770 [2024-12-15 07:05:17.190527] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.770 [2024-12-15 07:05:17.190536] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
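The trace above launches the SPDK target binary (nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and then blocks in waitforlisten until the app is serving RPCs on /var/tmp/spdk.sock. A minimal standalone sketch of that same pattern, assuming a stock SPDK build tree; the polling loop below is illustrative and stands in for the harness's own waitforlisten helper:

  # Start the NVMe-oF target: shm id 0 (-i), all tracepoint groups (-e 0xFFFF),
  # reactors pinned to cores 0-3 (-m 0xF)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait until the target answers on its default RPC socket before configuring it
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"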
00:24:55.770 [2024-12-15 07:05:17.190576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.770 [2024-12-15 07:05:17.190602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.770 [2024-12-15 07:05:17.190666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.770 [2024-12-15 07:05:17.190667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.338 07:05:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.338 07:05:17 -- common/autotest_common.sh@862 -- # return 0 00:24:56.338 07:05:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:56.338 07:05:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:56.338 07:05:17 -- common/autotest_common.sh@10 -- # set +x 00:24:56.338 07:05:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.338 07:05:17 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:56.338 07:05:17 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:59.625 07:05:21 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:59.625 07:05:21 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:59.625 07:05:21 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:59.626 07:05:21 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:59.884 07:05:21 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:59.884 07:05:21 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:24:59.884 07:05:21 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:59.884 07:05:21 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:24:59.884 07:05:21 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:25:00.143 [2024-12-15 07:05:21.569717] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:25:00.143 [2024-12-15 07:05:21.590241] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24919c0/0x249f710) succeed. 00:25:00.143 [2024-12-15 07:05:21.599633] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2492f60/0x24e0db0) succeed. 
00:25:00.144 07:05:21 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:00.402 07:05:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:00.402 07:05:21 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.661 07:05:22 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:00.661 07:05:22 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:00.661 07:05:22 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:00.920 [2024-12-15 07:05:22.433926] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:00.920 07:05:22 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:01.179 07:05:22 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:01.179 07:05:22 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:01.179 07:05:22 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:01.179 07:05:22 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:02.556 Initializing NVMe Controllers 00:25:02.556 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:02.556 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:02.556 Initialization complete. Launching workers. 00:25:02.556 ======================================================== 00:25:02.556 Latency(us) 00:25:02.556 Device Information : IOPS MiB/s Average min max 00:25:02.556 PCIE (0000:d8:00.0) NSID 1 from core 0: 102925.00 402.05 311.07 24.35 5204.90 00:25:02.556 ======================================================== 00:25:02.556 Total : 102925.00 402.05 311.07 24.35 5204.90 00:25:02.556 00:25:02.556 07:05:23 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:02.556 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.845 Initializing NVMe Controllers 00:25:05.845 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.845 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:05.845 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:05.845 Initialization complete. Launching workers. 
00:25:05.845 ======================================================== 00:25:05.845 Latency(us) 00:25:05.845 Device Information : IOPS MiB/s Average min max 00:25:05.845 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6826.99 26.67 146.28 45.35 5013.45 00:25:05.845 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5273.99 20.60 189.42 70.00 5088.67 00:25:05.845 ======================================================== 00:25:05.845 Total : 12100.98 47.27 165.08 45.35 5088.67 00:25:05.845 00:25:05.845 07:05:27 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:05.845 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.131 Initializing NVMe Controllers 00:25:09.131 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.131 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:09.131 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:09.131 Initialization complete. Launching workers. 00:25:09.131 ======================================================== 00:25:09.131 Latency(us) 00:25:09.131 Device Information : IOPS MiB/s Average min max 00:25:09.131 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19478.98 76.09 1642.98 457.88 7075.94 00:25:09.131 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7993.82 5871.02 10150.03 00:25:09.131 ======================================================== 00:25:09.131 Total : 23510.98 91.84 2732.12 457.88 10150.03 00:25:09.131 00:25:09.131 07:05:30 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:09.131 07:05:30 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:09.131 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.396 Initializing NVMe Controllers 00:25:14.396 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.396 Controller IO queue size 128, less than required. 00:25:14.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.396 Controller IO queue size 128, less than required. 00:25:14.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.396 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:14.396 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:14.396 Initialization complete. Launching workers. 
00:25:14.396 ======================================================== 00:25:14.396 Latency(us) 00:25:14.396 Device Information : IOPS MiB/s Average min max 00:25:14.396 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4079.00 1019.75 31484.38 15298.36 70292.01 00:25:14.396 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4136.50 1034.12 30802.56 14513.60 53952.68 00:25:14.396 ======================================================== 00:25:14.396 Total : 8215.50 2053.88 31141.08 14513.60 70292.01 00:25:14.396 00:25:14.396 07:05:34 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:14.396 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.396 No valid NVMe controllers or AIO or URING devices found 00:25:14.396 Initializing NVMe Controllers 00:25:14.396 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.396 Controller IO queue size 128, less than required. 00:25:14.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.396 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:14.396 Controller IO queue size 128, less than required. 00:25:14.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.396 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:14.396 WARNING: Some requested NVMe devices were skipped 00:25:14.396 07:05:35 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:14.396 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.581 Initializing NVMe Controllers 00:25:18.581 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:18.581 Controller IO queue size 128, less than required. 00:25:18.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:18.581 Controller IO queue size 128, less than required. 00:25:18.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:18.581 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:18.581 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:18.581 Initialization complete. Launching workers. 
00:25:18.581 00:25:18.581 ==================== 00:25:18.581 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:18.581 RDMA transport: 00:25:18.581 dev name: mlx5_0 00:25:18.581 polls: 420110 00:25:18.581 idle_polls: 415852 00:25:18.581 completions: 46269 00:25:18.581 queued_requests: 1 00:25:18.581 total_send_wrs: 23198 00:25:18.581 send_doorbell_updates: 4065 00:25:18.581 total_recv_wrs: 23198 00:25:18.581 recv_doorbell_updates: 4065 00:25:18.581 --------------------------------- 00:25:18.581 00:25:18.581 ==================== 00:25:18.581 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:18.581 RDMA transport: 00:25:18.581 dev name: mlx5_0 00:25:18.581 polls: 421846 00:25:18.581 idle_polls: 421557 00:25:18.581 completions: 20483 00:25:18.581 queued_requests: 1 00:25:18.581 total_send_wrs: 10305 00:25:18.581 send_doorbell_updates: 262 00:25:18.581 total_recv_wrs: 10305 00:25:18.581 recv_doorbell_updates: 262 00:25:18.581 --------------------------------- 00:25:18.581 ======================================================== 00:25:18.581 Latency(us) 00:25:18.581 Device Information : IOPS MiB/s Average min max 00:25:18.581 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5831.50 1457.88 22018.70 8670.04 51544.96 00:25:18.581 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2608.00 652.00 49200.94 24863.38 74337.06 00:25:18.581 ======================================================== 00:25:18.581 Total : 8439.50 2109.88 30418.64 8670.04 74337.06 00:25:18.581 00:25:18.581 07:05:39 -- host/perf.sh@66 -- # sync 00:25:18.581 07:05:39 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.581 07:05:39 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:18.581 07:05:39 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:18.581 07:05:39 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:25.140 07:05:45 -- host/perf.sh@72 -- # ls_guid=7c7c2643-796a-4010-95af-3f813197b735 00:25:25.140 07:05:45 -- host/perf.sh@73 -- # get_lvs_free_mb 7c7c2643-796a-4010-95af-3f813197b735 00:25:25.140 07:05:45 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7c7c2643-796a-4010-95af-3f813197b735 00:25:25.140 07:05:45 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:25.140 07:05:45 -- common/autotest_common.sh@1355 -- # local fc 00:25:25.140 07:05:45 -- common/autotest_common.sh@1356 -- # local cs 00:25:25.140 07:05:45 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:25.140 07:05:46 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:25.140 { 00:25:25.140 "uuid": "7c7c2643-796a-4010-95af-3f813197b735", 00:25:25.140 "name": "lvs_0", 00:25:25.140 "base_bdev": "Nvme0n1", 00:25:25.140 "total_data_clusters": 476466, 00:25:25.140 "free_clusters": 476466, 00:25:25.140 "block_size": 512, 00:25:25.140 "cluster_size": 4194304 00:25:25.140 } 00:25:25.140 ]' 00:25:25.140 07:05:46 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7c7c2643-796a-4010-95af-3f813197b735") .free_clusters' 00:25:25.140 07:05:46 -- common/autotest_common.sh@1358 -- # fc=476466 00:25:25.140 07:05:46 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7c7c2643-796a-4010-95af-3f813197b735") .cluster_size' 00:25:25.140 07:05:46 
-- common/autotest_common.sh@1359 -- # cs=4194304 00:25:25.140 07:05:46 -- common/autotest_common.sh@1362 -- # free_mb=1905864 00:25:25.140 07:05:46 -- common/autotest_common.sh@1363 -- # echo 1905864 00:25:25.140 1905864 00:25:25.140 07:05:46 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:25.140 07:05:46 -- host/perf.sh@78 -- # free_mb=20480 00:25:25.140 07:05:46 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7c7c2643-796a-4010-95af-3f813197b735 lbd_0 20480 00:25:25.140 07:05:46 -- host/perf.sh@80 -- # lb_guid=f6c43bdf-800c-4cf3-b461-cedd9ea6cc5f 00:25:25.140 07:05:46 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore f6c43bdf-800c-4cf3-b461-cedd9ea6cc5f lvs_n_0 00:25:27.040 07:05:48 -- host/perf.sh@83 -- # ls_nested_guid=6997499f-9aa7-4efc-94d4-e488ba06879c 00:25:27.040 07:05:48 -- host/perf.sh@84 -- # get_lvs_free_mb 6997499f-9aa7-4efc-94d4-e488ba06879c 00:25:27.040 07:05:48 -- common/autotest_common.sh@1353 -- # local lvs_uuid=6997499f-9aa7-4efc-94d4-e488ba06879c 00:25:27.040 07:05:48 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:27.040 07:05:48 -- common/autotest_common.sh@1355 -- # local fc 00:25:27.040 07:05:48 -- common/autotest_common.sh@1356 -- # local cs 00:25:27.040 07:05:48 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:27.298 07:05:48 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:27.298 { 00:25:27.298 "uuid": "7c7c2643-796a-4010-95af-3f813197b735", 00:25:27.298 "name": "lvs_0", 00:25:27.298 "base_bdev": "Nvme0n1", 00:25:27.298 "total_data_clusters": 476466, 00:25:27.298 "free_clusters": 471346, 00:25:27.298 "block_size": 512, 00:25:27.298 "cluster_size": 4194304 00:25:27.298 }, 00:25:27.298 { 00:25:27.298 "uuid": "6997499f-9aa7-4efc-94d4-e488ba06879c", 00:25:27.298 "name": "lvs_n_0", 00:25:27.298 "base_bdev": "f6c43bdf-800c-4cf3-b461-cedd9ea6cc5f", 00:25:27.298 "total_data_clusters": 5114, 00:25:27.298 "free_clusters": 5114, 00:25:27.298 "block_size": 512, 00:25:27.298 "cluster_size": 4194304 00:25:27.298 } 00:25:27.298 ]' 00:25:27.298 07:05:48 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="6997499f-9aa7-4efc-94d4-e488ba06879c") .free_clusters' 00:25:27.298 07:05:48 -- common/autotest_common.sh@1358 -- # fc=5114 00:25:27.298 07:05:48 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="6997499f-9aa7-4efc-94d4-e488ba06879c") .cluster_size' 00:25:27.557 07:05:48 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:27.557 07:05:48 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:25:27.557 07:05:48 -- common/autotest_common.sh@1363 -- # echo 20456 00:25:27.557 20456 00:25:27.557 07:05:48 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:27.557 07:05:48 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6997499f-9aa7-4efc-94d4-e488ba06879c lbd_nest_0 20456 00:25:27.557 07:05:49 -- host/perf.sh@88 -- # lb_nested_guid=f97a3e28-41b4-4a62-a56d-35abfd890c50 00:25:27.557 07:05:49 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.815 07:05:49 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:27.815 07:05:49 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
f97a3e28-41b4-4a62-a56d-35abfd890c50 00:25:28.073 07:05:49 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:28.073 07:05:49 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:28.073 07:05:49 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:28.073 07:05:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:28.073 07:05:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:28.073 07:05:49 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:28.331 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.529 Initializing NVMe Controllers 00:25:40.529 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.529 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:40.529 Initialization complete. Launching workers. 00:25:40.529 ======================================================== 00:25:40.529 Latency(us) 00:25:40.529 Device Information : IOPS MiB/s Average min max 00:25:40.529 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5979.75 2.92 166.96 66.73 8057.11 00:25:40.529 ======================================================== 00:25:40.529 Total : 5979.75 2.92 166.96 66.73 8057.11 00:25:40.529 00:25:40.529 07:06:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:40.529 07:06:01 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:40.529 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.723 Initializing NVMe Controllers 00:25:52.723 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:52.723 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:52.723 Initialization complete. Launching workers. 00:25:52.723 ======================================================== 00:25:52.723 Latency(us) 00:25:52.723 Device Information : IOPS MiB/s Average min max 00:25:52.723 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2678.41 334.80 373.01 155.78 8121.67 00:25:52.723 ======================================================== 00:25:52.723 Total : 2678.41 334.80 373.01 155.78 8121.67 00:25:52.723 00:25:52.723 07:06:12 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:52.723 07:06:12 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:52.723 07:06:12 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:52.723 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.688 Initializing NVMe Controllers 00:26:02.688 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.688 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:02.688 Initialization complete. Launching workers. 
00:26:02.688 ======================================================== 00:26:02.688 Latency(us) 00:26:02.688 Device Information : IOPS MiB/s Average min max 00:26:02.688 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12257.45 5.99 2610.63 885.85 9069.80 00:26:02.688 ======================================================== 00:26:02.688 Total : 12257.45 5.99 2610.63 885.85 9069.80 00:26:02.688 00:26:02.688 07:06:23 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:02.688 07:06:23 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:02.688 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.889 Initializing NVMe Controllers 00:26:14.889 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.889 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:14.889 Initialization complete. Launching workers. 00:26:14.889 ======================================================== 00:26:14.889 Latency(us) 00:26:14.889 Device Information : IOPS MiB/s Average min max 00:26:14.889 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3975.27 496.91 8049.44 3925.94 16035.41 00:26:14.889 ======================================================== 00:26:14.889 Total : 3975.27 496.91 8049.44 3925.94 16035.41 00:26:14.889 00:26:14.889 07:06:35 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:14.889 07:06:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:14.889 07:06:35 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:14.889 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.087 Initializing NVMe Controllers 00:26:27.087 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:27.087 Controller IO queue size 128, less than required. 00:26:27.087 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:27.087 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:27.087 Initialization complete. Launching workers. 00:26:27.087 ======================================================== 00:26:27.087 Latency(us) 00:26:27.087 Device Information : IOPS MiB/s Average min max 00:26:27.087 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19834.10 9.68 6455.70 1845.89 16710.01 00:26:27.087 ======================================================== 00:26:27.087 Total : 19834.10 9.68 6455.70 1845.89 16710.01 00:26:27.087 00:26:27.087 07:06:46 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:27.087 07:06:46 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:27.087 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.057 Initializing NVMe Controllers 00:26:37.057 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.057 Controller IO queue size 128, less than required. 00:26:37.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:37.057 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:37.057 Initialization complete. Launching workers. 00:26:37.057 ======================================================== 00:26:37.057 Latency(us) 00:26:37.057 Device Information : IOPS MiB/s Average min max 00:26:37.057 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11431.02 1428.88 11201.95 3373.65 23780.26 00:26:37.057 ======================================================== 00:26:37.057 Total : 11431.02 1428.88 11201.95 3373.65 23780.26 00:26:37.057 00:26:37.057 07:06:57 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.057 07:06:58 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f97a3e28-41b4-4a62-a56d-35abfd890c50 00:26:37.057 07:06:58 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:37.315 07:06:58 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f6c43bdf-800c-4cf3-b461-cedd9ea6cc5f 00:26:37.573 07:06:59 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:37.831 07:06:59 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:37.831 07:06:59 -- host/perf.sh@114 -- # nvmftestfini 00:26:37.831 07:06:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:37.831 07:06:59 -- nvmf/common.sh@116 -- # sync 00:26:37.831 07:06:59 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:26:37.831 07:06:59 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:26:37.831 07:06:59 -- nvmf/common.sh@119 -- # set +e 00:26:37.831 07:06:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:37.831 07:06:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:26:37.831 rmmod nvme_rdma 00:26:37.831 rmmod nvme_fabrics 00:26:37.831 07:06:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:37.831 07:06:59 -- nvmf/common.sh@123 -- # set -e 00:26:37.831 07:06:59 -- nvmf/common.sh@124 -- # return 0 00:26:37.831 07:06:59 -- nvmf/common.sh@477 -- # '[' -n 1456398 ']' 00:26:37.831 07:06:59 -- nvmf/common.sh@478 -- # killprocess 1456398 00:26:37.831 07:06:59 -- common/autotest_common.sh@936 -- # '[' -z 1456398 ']' 00:26:37.831 07:06:59 -- common/autotest_common.sh@940 -- # kill -0 1456398 00:26:37.831 07:06:59 -- common/autotest_common.sh@941 -- # uname 00:26:37.831 07:06:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:37.831 07:06:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1456398 00:26:37.831 07:06:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:37.831 07:06:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:37.831 07:06:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1456398' 00:26:37.831 killing process with pid 1456398 00:26:37.831 07:06:59 -- common/autotest_common.sh@955 -- # kill 1456398 00:26:37.831 07:06:59 -- common/autotest_common.sh@960 -- # wait 1456398 00:26:40.412 07:07:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:40.412 07:07:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:26:40.412 00:26:40.412 real 1m51.531s 00:26:40.412 user 7m2.379s 00:26:40.412 sys 0m6.882s 00:26:40.412 07:07:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:40.412 07:07:01 -- 
common/autotest_common.sh@10 -- # set +x 00:26:40.412 ************************************ 00:26:40.412 END TEST nvmf_perf 00:26:40.412 ************************************ 00:26:40.412 07:07:01 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:40.412 07:07:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:40.412 07:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.412 07:07:01 -- common/autotest_common.sh@10 -- # set +x 00:26:40.412 ************************************ 00:26:40.412 START TEST nvmf_fio_host 00:26:40.412 ************************************ 00:26:40.412 07:07:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:26:40.703 * Looking for test storage... 00:26:40.703 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:40.703 07:07:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:40.703 07:07:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:40.703 07:07:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:40.703 07:07:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:40.703 07:07:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:40.703 07:07:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:40.703 07:07:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:40.703 07:07:02 -- scripts/common.sh@335 -- # IFS=.-: 00:26:40.703 07:07:02 -- scripts/common.sh@335 -- # read -ra ver1 00:26:40.703 07:07:02 -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.703 07:07:02 -- scripts/common.sh@336 -- # read -ra ver2 00:26:40.703 07:07:02 -- scripts/common.sh@337 -- # local 'op=<' 00:26:40.703 07:07:02 -- scripts/common.sh@339 -- # ver1_l=2 00:26:40.703 07:07:02 -- scripts/common.sh@340 -- # ver2_l=1 00:26:40.703 07:07:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:40.703 07:07:02 -- scripts/common.sh@343 -- # case "$op" in 00:26:40.703 07:07:02 -- scripts/common.sh@344 -- # : 1 00:26:40.703 07:07:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:40.703 07:07:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.703 07:07:02 -- scripts/common.sh@364 -- # decimal 1 00:26:40.703 07:07:02 -- scripts/common.sh@352 -- # local d=1 00:26:40.703 07:07:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.703 07:07:02 -- scripts/common.sh@354 -- # echo 1 00:26:40.703 07:07:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:40.703 07:07:02 -- scripts/common.sh@365 -- # decimal 2 00:26:40.703 07:07:02 -- scripts/common.sh@352 -- # local d=2 00:26:40.703 07:07:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.703 07:07:02 -- scripts/common.sh@354 -- # echo 2 00:26:40.703 07:07:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:40.703 07:07:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:40.703 07:07:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:40.703 07:07:02 -- scripts/common.sh@367 -- # return 0 00:26:40.703 07:07:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.703 07:07:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.703 --rc genhtml_branch_coverage=1 00:26:40.703 --rc genhtml_function_coverage=1 00:26:40.703 --rc genhtml_legend=1 00:26:40.703 --rc geninfo_all_blocks=1 00:26:40.703 --rc geninfo_unexecuted_blocks=1 00:26:40.703 00:26:40.703 ' 00:26:40.703 07:07:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.703 --rc genhtml_branch_coverage=1 00:26:40.703 --rc genhtml_function_coverage=1 00:26:40.703 --rc genhtml_legend=1 00:26:40.703 --rc geninfo_all_blocks=1 00:26:40.703 --rc geninfo_unexecuted_blocks=1 00:26:40.703 00:26:40.703 ' 00:26:40.703 07:07:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.703 --rc genhtml_branch_coverage=1 00:26:40.703 --rc genhtml_function_coverage=1 00:26:40.703 --rc genhtml_legend=1 00:26:40.703 --rc geninfo_all_blocks=1 00:26:40.703 --rc geninfo_unexecuted_blocks=1 00:26:40.703 00:26:40.703 ' 00:26:40.703 07:07:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.703 --rc genhtml_branch_coverage=1 00:26:40.703 --rc genhtml_function_coverage=1 00:26:40.703 --rc genhtml_legend=1 00:26:40.703 --rc geninfo_all_blocks=1 00:26:40.703 --rc geninfo_unexecuted_blocks=1 00:26:40.703 00:26:40.703 ' 00:26:40.703 07:07:02 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:40.703 07:07:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.703 07:07:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.703 07:07:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.703 07:07:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.703 07:07:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.703 07:07:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.703 07:07:02 -- paths/export.sh@5 -- # export PATH 00:26:40.703 07:07:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.703 07:07:02 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.703 07:07:02 -- nvmf/common.sh@7 -- # uname -s 00:26:40.703 07:07:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.703 07:07:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.703 07:07:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.703 07:07:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.703 07:07:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.703 07:07:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.703 07:07:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.703 07:07:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.703 07:07:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.703 07:07:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.703 07:07:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:40.703 07:07:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:40.703 07:07:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.703 07:07:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.703 07:07:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.703 07:07:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:40.703 07:07:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.703 07:07:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.703 07:07:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.704 07:07:02 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.704 07:07:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.704 07:07:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.704 07:07:02 -- paths/export.sh@5 -- # export PATH 00:26:40.704 07:07:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.704 07:07:02 -- nvmf/common.sh@46 -- # : 0 00:26:40.704 07:07:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:40.704 07:07:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:40.704 07:07:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:40.704 07:07:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.704 07:07:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.704 07:07:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:40.704 07:07:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:40.704 07:07:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:40.704 07:07:02 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:40.704 07:07:02 -- host/fio.sh@14 -- # nvmftestinit 00:26:40.704 07:07:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:26:40.704 07:07:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.704 07:07:02 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:26:40.704 07:07:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:40.704 07:07:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:40.704 07:07:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.704 07:07:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.704 07:07:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.704 07:07:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:40.704 07:07:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:40.704 07:07:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:40.704 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:26:47.268 07:07:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:47.268 07:07:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:47.268 07:07:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:47.268 07:07:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:47.268 07:07:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:47.268 07:07:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:47.268 07:07:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:47.268 07:07:08 -- nvmf/common.sh@294 -- # net_devs=() 00:26:47.268 07:07:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:47.268 07:07:08 -- nvmf/common.sh@295 -- # e810=() 00:26:47.268 07:07:08 -- nvmf/common.sh@295 -- # local -ga e810 00:26:47.268 07:07:08 -- nvmf/common.sh@296 -- # x722=() 00:26:47.268 07:07:08 -- nvmf/common.sh@296 -- # local -ga x722 00:26:47.268 07:07:08 -- nvmf/common.sh@297 -- # mlx=() 00:26:47.268 07:07:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:47.268 07:07:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.268 07:07:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:47.268 07:07:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:26:47.268 07:07:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:26:47.268 07:07:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:26:47.268 07:07:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:47.268 07:07:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:47.268 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:47.268 07:07:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:47.268 07:07:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:47.268 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:47.268 07:07:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:26:47.268 07:07:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:47.268 07:07:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.268 07:07:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:47.268 07:07:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.268 07:07:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:47.268 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:47.268 07:07:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.268 07:07:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.268 07:07:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:47.268 07:07:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.268 07:07:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:47.268 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:47.268 07:07:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.268 07:07:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:47.268 07:07:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:47.268 07:07:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:26:47.268 07:07:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:26:47.268 07:07:08 -- nvmf/common.sh@57 -- # uname 00:26:47.268 07:07:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:26:47.268 07:07:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:26:47.268 07:07:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:26:47.268 07:07:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:26:47.268 07:07:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:26:47.268 07:07:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:26:47.268 07:07:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:26:47.268 07:07:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:26:47.268 07:07:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:26:47.268 07:07:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:47.268 07:07:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:26:47.268 07:07:08 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:47.268 07:07:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:47.268 07:07:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:47.268 07:07:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:47.268 07:07:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:47.268 07:07:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:47.268 07:07:08 -- nvmf/common.sh@104 -- # continue 2 00:26:47.268 07:07:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:47.268 07:07:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:26:47.268 07:07:08 -- nvmf/common.sh@104 -- # continue 2 00:26:47.268 07:07:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:47.268 07:07:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:26:47.268 07:07:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:47.268 07:07:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:47.268 07:07:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:47.268 07:07:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:47.268 07:07:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:26:47.268 07:07:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:26:47.268 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:47.268 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:47.268 altname enp217s0f0np0 00:26:47.268 altname ens818f0np0 00:26:47.268 inet 192.168.100.8/24 scope global mlx_0_0 00:26:47.268 valid_lft forever preferred_lft forever 00:26:47.268 07:07:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:26:47.268 07:07:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:26:47.268 07:07:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:47.268 07:07:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:47.268 07:07:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:47.268 07:07:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:47.268 07:07:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:26:47.268 07:07:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:26:47.268 07:07:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:26:47.269 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:47.269 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:47.269 altname enp217s0f1np1 00:26:47.269 altname ens818f1np1 00:26:47.269 inet 192.168.100.9/24 scope global mlx_0_1 00:26:47.269 valid_lft forever preferred_lft forever 00:26:47.269 07:07:08 -- nvmf/common.sh@410 -- # return 0 00:26:47.269 07:07:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:47.269 07:07:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:47.269 07:07:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:26:47.269 07:07:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
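[Note] The per-interface address discovery traced above reduces to a single pipeline: field 4 of `ip -o -4 addr show` carries the CIDR (e.g. 192.168.100.8/24), and awk/cut strip it to the bare address. A minimal sketch reconstructed from the xtrace (the function name matches the one in nvmf/common.sh; the expected outputs are the addresses printed in this run):

  get_ip_address() {
      local interface=$1
      # field 4 of `ip -o -4 addr show` is e.g. 192.168.100.8/24; keep only the address
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9
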
00:26:47.269 07:07:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:26:47.269 07:07:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:47.269 07:07:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:26:47.269 07:07:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:26:47.269 07:07:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:47.269 07:07:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:26:47.269 07:07:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:47.269 07:07:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:47.269 07:07:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:47.269 07:07:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:26:47.269 07:07:08 -- nvmf/common.sh@104 -- # continue 2 00:26:47.269 07:07:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:26:47.269 07:07:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:47.269 07:07:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:47.269 07:07:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:47.269 07:07:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:47.269 07:07:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:26:47.269 07:07:08 -- nvmf/common.sh@104 -- # continue 2 00:26:47.269 07:07:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:47.269 07:07:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:26:47.269 07:07:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:26:47.269 07:07:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:47.269 07:07:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:26:47.269 07:07:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:47.269 07:07:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:26:47.269 07:07:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:26:47.269 07:07:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:26:47.269 07:07:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:26:47.269 07:07:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:26:47.269 07:07:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:26:47.269 07:07:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:26:47.269 192.168.100.9' 00:26:47.269 07:07:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:26:47.269 192.168.100.9' 00:26:47.269 07:07:08 -- nvmf/common.sh@445 -- # head -n 1 00:26:47.269 07:07:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:47.269 07:07:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:26:47.269 192.168.100.9' 00:26:47.269 07:07:08 -- nvmf/common.sh@446 -- # tail -n +2 00:26:47.269 07:07:08 -- nvmf/common.sh@446 -- # head -n 1 00:26:47.269 07:07:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:47.269 07:07:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:26:47.269 07:07:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:47.269 07:07:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:26:47.269 07:07:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:26:47.269 07:07:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:26:47.269 07:07:08 -- host/fio.sh@16 -- # [[ y != y ]] 00:26:47.269 07:07:08 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:47.269 07:07:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.269 07:07:08 -- common/autotest_common.sh@10 -- # set +x 
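[Note] The head/tail juggling above simply peels the first and second discovered RDMA addresses off a newline-separated list before the initiator-side modules are loaded. A rough equivalent, assuming RDMA_IP_LIST holds one address per line as in this trace:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'

  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

  # with a target address in hand, select the rdma transport and load the
  # kernel NVMe-oF initiator module
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma
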
00:26:47.269 07:07:08 -- host/fio.sh@24 -- # nvmfpid=1477765 00:26:47.269 07:07:08 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.269 07:07:08 -- host/fio.sh@28 -- # waitforlisten 1477765 00:26:47.269 07:07:08 -- common/autotest_common.sh@829 -- # '[' -z 1477765 ']' 00:26:47.269 07:07:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.269 07:07:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.269 07:07:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.269 07:07:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.269 07:07:08 -- common/autotest_common.sh@10 -- # set +x 00:26:47.269 07:07:08 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:47.269 [2024-12-15 07:07:08.722807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:47.269 [2024-12-15 07:07:08.722855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.269 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.269 [2024-12-15 07:07:08.793814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:47.269 [2024-12-15 07:07:08.832430] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:47.269 [2024-12-15 07:07:08.832543] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.269 [2024-12-15 07:07:08.832554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.269 [2024-12-15 07:07:08.832563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.269 [2024-12-15 07:07:08.832611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.269 [2024-12-15 07:07:08.832717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.269 [2024-12-15 07:07:08.832801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.269 [2024-12-15 07:07:08.832803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.203 07:07:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.203 07:07:09 -- common/autotest_common.sh@862 -- # return 0 00:26:48.203 07:07:09 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:48.203 [2024-12-15 07:07:09.723051] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1be80d0/0x1bec5a0) succeed. 00:26:48.203 [2024-12-15 07:07:09.732258] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1be9670/0x1c2dc40) succeed. 
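[Note] Condensed, the target bring-up that host/fio.sh has just performed is a two-step dance: launch the nvmf target app, then register the RDMA transport over its RPC socket (one IB device is created per mlx5 port, as the two "create_ib_device ... succeed" notices show). A sketch with the absolute paths from the log shortened to a repository-root working directory:

  # start the nvmf target app and wait for its RPC socket to come up
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # test-framework helper; polls /var/tmp/spdk.sock

  # register the RDMA transport with the same options used in this run
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
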
00:26:48.461 07:07:09 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:48.461 07:07:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:48.461 07:07:09 -- common/autotest_common.sh@10 -- # set +x 00:26:48.461 07:07:09 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:48.461 Malloc1 00:26:48.719 07:07:10 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.719 07:07:10 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:48.978 07:07:10 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:49.237 [2024-12-15 07:07:10.677163] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:49.237 07:07:10 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:49.496 07:07:10 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:26:49.496 07:07:10 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:49.496 07:07:10 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:49.496 07:07:10 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:49.496 07:07:10 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:49.496 07:07:10 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:49.496 07:07:10 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:49.496 07:07:10 -- common/autotest_common.sh@1330 -- # shift 00:26:49.496 07:07:10 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:49.496 07:07:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:49.496 07:07:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:49.496 07:07:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:49.496 07:07:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:49.496 07:07:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:49.496 07:07:10 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:49.496 07:07:10 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:26:49.755 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:49.755 fio-3.35 00:26:49.755 Starting 1 thread 00:26:49.755 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.290 00:26:52.290 test: (groupid=0, jobs=1): err= 0: pid=1478409: Sun Dec 15 07:07:13 2024 00:26:52.290 read: IOPS=19.2k, BW=74.9MiB/s (78.5MB/s)(150MiB/2003msec) 00:26:52.290 slat (nsec): min=1333, max=28088, avg=1461.43, stdev=416.99 00:26:52.290 clat (usec): min=1848, max=5958, avg=3318.58, stdev=71.49 00:26:52.290 lat (usec): min=1869, max=5960, avg=3320.04, stdev=71.44 00:26:52.290 clat percentiles (usec): 00:26:52.290 | 1.00th=[ 3294], 5.00th=[ 3294], 10.00th=[ 3294], 20.00th=[ 3294], 00:26:52.290 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3326], 60.00th=[ 3326], 00:26:52.290 | 70.00th=[ 3326], 80.00th=[ 3326], 90.00th=[ 3326], 95.00th=[ 3326], 00:26:52.290 | 99.00th=[ 3392], 99.50th=[ 3458], 99.90th=[ 4228], 99.95th=[ 5080], 00:26:52.290 | 99.99th=[ 5932] 00:26:52.290 bw ( KiB/s): min=75088, max=77496, per=99.98%, avg=76684.00, stdev=1082.27, samples=4 00:26:52.290 iops : min=18772, max=19374, avg=19171.00, stdev=270.57, samples=4 00:26:52.290 write: IOPS=19.2k, BW=74.8MiB/s (78.5MB/s)(150MiB/2003msec); 0 zone resets 00:26:52.290 slat (nsec): min=1367, max=17601, avg=1551.48, stdev=459.27 00:26:52.290 clat (usec): min=2567, max=5970, avg=3317.36, stdev=70.89 00:26:52.290 lat (usec): min=2579, max=5972, avg=3318.91, stdev=70.83 00:26:52.290 clat percentiles (usec): 00:26:52.290 | 1.00th=[ 3294], 5.00th=[ 3294], 10.00th=[ 3294], 20.00th=[ 3294], 00:26:52.290 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3326], 60.00th=[ 3326], 00:26:52.290 | 70.00th=[ 3326], 80.00th=[ 3326], 90.00th=[ 3326], 95.00th=[ 3326], 00:26:52.290 | 99.00th=[ 3392], 99.50th=[ 3490], 99.90th=[ 4228], 99.95th=[ 5080], 00:26:52.290 | 99.99th=[ 5932] 00:26:52.290 bw ( KiB/s): min=74976, max=77400, per=99.99%, avg=76610.00, stdev=1135.11, samples=4 00:26:52.290 iops : min=18744, max=19350, avg=19152.50, stdev=283.78, samples=4 00:26:52.290 lat (msec) : 2=0.01%, 4=99.89%, 10=0.11% 00:26:52.290 cpu : usr=99.45%, sys=0.15%, ctx=16, majf=0, minf=2 00:26:52.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:52.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:52.290 issued rwts: total=38406,38368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:52.290 00:26:52.290 Run status group 0 (all jobs): 00:26:52.290 READ: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2003-2003msec 00:26:52.290 WRITE: bw=74.8MiB/s (78.5MB/s), 74.8MiB/s-74.8MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2003-2003msec 00:26:52.290 07:07:13 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:52.290 07:07:13 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:52.290 07:07:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:52.290 07:07:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:52.290 07:07:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:52.290 07:07:13 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.290 07:07:13 -- common/autotest_common.sh@1330 -- # shift 00:26:52.290 07:07:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:52.290 07:07:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:52.290 07:07:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:52.290 07:07:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:52.290 07:07:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:52.290 07:07:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:52.290 07:07:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:52.290 07:07:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:26:52.548 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:52.548 fio-3.35 00:26:52.548 Starting 1 thread 00:26:52.548 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.078 00:26:55.078 test: (groupid=0, jobs=1): err= 0: pid=1479069: Sun Dec 15 07:07:16 2024 00:26:55.078 read: IOPS=15.3k, BW=238MiB/s (250MB/s)(468MiB/1961msec) 00:26:55.078 slat (nsec): min=2224, max=34758, avg=2534.13, stdev=981.28 00:26:55.078 clat (usec): min=460, max=8422, avg=1649.96, stdev=1364.32 00:26:55.078 lat (usec): min=462, max=8438, avg=1652.49, stdev=1364.61 00:26:55.078 clat percentiles (usec): 00:26:55.078 | 1.00th=[ 644], 5.00th=[ 734], 10.00th=[ 791], 20.00th=[ 865], 00:26:55.078 | 30.00th=[ 930], 40.00th=[ 1012], 50.00th=[ 1123], 60.00th=[ 1237], 00:26:55.078 | 70.00th=[ 1369], 80.00th=[ 1598], 90.00th=[ 4621], 95.00th=[ 4686], 00:26:55.078 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 7046], 99.95th=[ 7177], 00:26:55.078 | 99.99th=[ 8356] 00:26:55.078 bw ( KiB/s): min=111136, max=125472, per=48.30%, avg=117944.00, stdev=6031.60, samples=4 00:26:55.078 iops : min= 6944, max= 7842, avg=7371.50, stdev=377.50, samples=4 00:26:55.078 write: IOPS=8829, BW=138MiB/s (145MB/s)(240MiB/1737msec); 0 zone resets 00:26:55.078 slat (usec): min=26, max=112, avg=28.55, 
stdev= 5.62 00:26:55.078 clat (usec): min=3824, max=19108, avg=11758.68, stdev=1730.50 00:26:55.078 lat (usec): min=3852, max=19134, avg=11787.23, stdev=1729.99 00:26:55.078 clat percentiles (usec): 00:26:55.078 | 1.00th=[ 6718], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10421], 00:26:55.078 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:26:55.078 | 70.00th=[12649], 80.00th=[13173], 90.00th=[13829], 95.00th=[14615], 00:26:55.078 | 99.00th=[15926], 99.50th=[16319], 99.90th=[18220], 99.95th=[18744], 00:26:55.078 | 99.99th=[19006] 00:26:55.078 bw ( KiB/s): min=118336, max=130848, per=86.85%, avg=122696.00, stdev=5557.96, samples=4 00:26:55.078 iops : min= 7396, max= 8178, avg=7668.50, stdev=347.37, samples=4 00:26:55.078 lat (usec) : 500=0.01%, 750=4.27%, 1000=21.49% 00:26:55.078 lat (msec) : 2=29.66%, 4=1.77%, 10=13.73%, 20=29.08% 00:26:55.078 cpu : usr=96.56%, sys=1.60%, ctx=205, majf=0, minf=1 00:26:55.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:55.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.078 issued rwts: total=29930,15337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.078 00:26:55.078 Run status group 0 (all jobs): 00:26:55.078 READ: bw=238MiB/s (250MB/s), 238MiB/s-238MiB/s (250MB/s-250MB/s), io=468MiB (490MB), run=1961-1961msec 00:26:55.078 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=240MiB (251MB), run=1737-1737msec 00:26:55.078 07:07:16 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.078 07:07:16 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:55.078 07:07:16 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:55.078 07:07:16 -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:55.078 07:07:16 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:55.078 07:07:16 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:55.078 07:07:16 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:55.078 07:07:16 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:55.078 07:07:16 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:55.078 07:07:16 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:55.078 07:07:16 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:26:55.078 07:07:16 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:26:58.360 Nvme0n1 00:26:58.360 07:07:19 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:03.623 07:07:25 -- host/fio.sh@53 -- # ls_guid=98f0c682-0a4e-49a6-a997-e1e3bb6f55f5 00:27:03.623 07:07:25 -- host/fio.sh@54 -- # get_lvs_free_mb 98f0c682-0a4e-49a6-a997-e1e3bb6f55f5 00:27:03.623 07:07:25 -- common/autotest_common.sh@1353 -- # local lvs_uuid=98f0c682-0a4e-49a6-a997-e1e3bb6f55f5 00:27:03.623 07:07:25 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:03.623 07:07:25 -- common/autotest_common.sh@1355 -- # local fc 00:27:03.623 07:07:25 -- common/autotest_common.sh@1356 -- # local cs 00:27:03.623 07:07:25 -- 
common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.882 07:07:25 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:03.882 { 00:27:03.882 "uuid": "98f0c682-0a4e-49a6-a997-e1e3bb6f55f5", 00:27:03.882 "name": "lvs_0", 00:27:03.882 "base_bdev": "Nvme0n1", 00:27:03.882 "total_data_clusters": 1862, 00:27:03.882 "free_clusters": 1862, 00:27:03.882 "block_size": 512, 00:27:03.882 "cluster_size": 1073741824 00:27:03.882 } 00:27:03.882 ]' 00:27:03.882 07:07:25 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="98f0c682-0a4e-49a6-a997-e1e3bb6f55f5") .free_clusters' 00:27:03.882 07:07:25 -- common/autotest_common.sh@1358 -- # fc=1862 00:27:03.882 07:07:25 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="98f0c682-0a4e-49a6-a997-e1e3bb6f55f5") .cluster_size' 00:27:03.882 07:07:25 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:27:03.882 07:07:25 -- common/autotest_common.sh@1362 -- # free_mb=1906688 00:27:03.882 07:07:25 -- common/autotest_common.sh@1363 -- # echo 1906688 00:27:03.882 1906688 00:27:03.882 07:07:25 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:27:04.446 fe0e3297-0f53-4043-93f3-19ef6d107f69 00:27:04.446 07:07:25 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:04.704 07:07:26 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:04.704 07:07:26 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:04.962 07:07:26 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:04.962 07:07:26 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:04.962 07:07:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:04.962 07:07:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:04.962 07:07:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:04.962 07:07:26 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:04.962 07:07:26 -- common/autotest_common.sh@1330 -- # shift 00:27:04.962 07:07:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:04.962 07:07:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:04.962 07:07:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:04.962 07:07:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
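[Note] The ldd/grep/awk probing traced around this point is the fio_plugin helper deciding whether an ASan runtime must be preloaded ahead of the SPDK fio engine; in this run no sanitizer library is linked in, so asan_lib stays empty and LD_PRELOAD ends up holding only the plugin itself. A condensed sketch of that logic, reconstructed from the xtrace (paths and the --filename string are taken verbatim from the log):

  plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
  fio_config=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio

  sanitizers=('libasan' 'libclang_rt.asan')
  for sanitizer in "${sanitizers[@]}"; do
      # an ASan-instrumented build must load its sanitizer runtime first
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
  done

  LD_PRELOAD="$LD_PRELOAD $plugin" /usr/src/fio/fio "$fio_config" \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
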
00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:04.962 07:07:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:04.962 07:07:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:04.962 07:07:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:04.962 07:07:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:05.220 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:05.220 fio-3.35 00:27:05.220 Starting 1 thread 00:27:05.220 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.748 00:27:07.748 test: (groupid=0, jobs=1): err= 0: pid=1481389: Sun Dec 15 07:07:29 2024 00:27:07.748 read: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(79.9MiB/2004msec) 00:27:07.748 slat (nsec): min=1330, max=17054, avg=1434.75, stdev=246.92 00:27:07.748 clat (usec): min=190, max=335971, avg=6218.58, stdev=18588.07 00:27:07.748 lat (usec): min=192, max=335974, avg=6220.01, stdev=18588.10 00:27:07.748 clat percentiles (msec): 00:27:07.748 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:07.748 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:07.748 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:07.748 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 338], 99.95th=[ 338], 00:27:07.748 | 99.99th=[ 338] 00:27:07.748 bw ( KiB/s): min=14640, max=49592, per=99.86%, avg=40748.00, stdev=17406.42, samples=4 00:27:07.748 iops : min= 3660, max=12398, avg=10187.00, stdev=4351.60, samples=4 00:27:07.748 write: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(79.9MiB/2004msec); 0 zone resets 00:27:07.748 slat (nsec): min=1368, max=17708, avg=1535.25, stdev=238.41 00:27:07.748 clat (usec): min=175, max=336297, avg=6188.40, stdev=18071.96 00:27:07.748 lat (usec): min=177, max=336300, avg=6189.93, stdev=18072.02 00:27:07.748 clat percentiles (msec): 00:27:07.748 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:07.748 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:07.748 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:07.748 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 338], 99.95th=[ 338], 00:27:07.748 | 99.99th=[ 338] 00:27:07.748 bw ( KiB/s): min=15320, max=49400, per=99.92%, avg=40794.00, stdev=16982.86, samples=4 00:27:07.748 iops : min= 3830, max=12350, avg=10198.50, stdev=4245.72, samples=4 00:27:07.748 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:27:07.748 lat (msec) : 2=0.05%, 4=0.24%, 10=99.36%, 500=0.31% 00:27:07.748 cpu : usr=99.60%, sys=0.05%, ctx=16, majf=0, minf=2 00:27:07.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:07.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:07.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:07.748 issued rwts: total=20443,20455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:07.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:07.748 00:27:07.748 Run status group 0 (all jobs): 00:27:07.748 READ: bw=39.8MiB/s (41.8MB/s), 
39.8MiB/s-39.8MiB/s (41.8MB/s-41.8MB/s), io=79.9MiB (83.7MB), run=2004-2004msec 00:27:07.748 WRITE: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=79.9MiB (83.8MB), run=2004-2004msec 00:27:07.748 07:07:29 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:08.006 07:07:29 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:09.379 07:07:30 -- host/fio.sh@64 -- # ls_nested_guid=fb3567ab-4e97-4319-8638-02b5b2b8ec3e 00:27:09.379 07:07:30 -- host/fio.sh@65 -- # get_lvs_free_mb fb3567ab-4e97-4319-8638-02b5b2b8ec3e 00:27:09.379 07:07:30 -- common/autotest_common.sh@1353 -- # local lvs_uuid=fb3567ab-4e97-4319-8638-02b5b2b8ec3e 00:27:09.379 07:07:30 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:09.379 07:07:30 -- common/autotest_common.sh@1355 -- # local fc 00:27:09.379 07:07:30 -- common/autotest_common.sh@1356 -- # local cs 00:27:09.379 07:07:30 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:09.379 07:07:30 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:09.379 { 00:27:09.379 "uuid": "98f0c682-0a4e-49a6-a997-e1e3bb6f55f5", 00:27:09.379 "name": "lvs_0", 00:27:09.379 "base_bdev": "Nvme0n1", 00:27:09.379 "total_data_clusters": 1862, 00:27:09.379 "free_clusters": 0, 00:27:09.379 "block_size": 512, 00:27:09.379 "cluster_size": 1073741824 00:27:09.379 }, 00:27:09.379 { 00:27:09.379 "uuid": "fb3567ab-4e97-4319-8638-02b5b2b8ec3e", 00:27:09.379 "name": "lvs_n_0", 00:27:09.379 "base_bdev": "fe0e3297-0f53-4043-93f3-19ef6d107f69", 00:27:09.379 "total_data_clusters": 476206, 00:27:09.379 "free_clusters": 476206, 00:27:09.379 "block_size": 512, 00:27:09.379 "cluster_size": 4194304 00:27:09.379 } 00:27:09.379 ]' 00:27:09.379 07:07:30 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="fb3567ab-4e97-4319-8638-02b5b2b8ec3e") .free_clusters' 00:27:09.379 07:07:30 -- common/autotest_common.sh@1358 -- # fc=476206 00:27:09.379 07:07:30 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="fb3567ab-4e97-4319-8638-02b5b2b8ec3e") .cluster_size' 00:27:09.379 07:07:30 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:09.379 07:07:30 -- common/autotest_common.sh@1362 -- # free_mb=1904824 00:27:09.379 07:07:30 -- common/autotest_common.sh@1363 -- # echo 1904824 00:27:09.379 1904824 00:27:09.379 07:07:30 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:27:10.313 022fa459-6d71-4826-bb88-54d56f14a6c9 00:27:10.313 07:07:31 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:10.571 07:07:31 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:10.571 07:07:32 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:10.830 07:07:32 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:10.830 07:07:32 -- 
common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:10.830 07:07:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:10.830 07:07:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:10.830 07:07:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:10.830 07:07:32 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:10.830 07:07:32 -- common/autotest_common.sh@1330 -- # shift 00:27:10.830 07:07:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:10.830 07:07:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:10.830 07:07:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:10.830 07:07:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:10.830 07:07:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:10.830 07:07:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:10.830 07:07:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:10.830 07:07:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:11.088 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:11.088 fio-3.35 00:27:11.088 Starting 1 thread 00:27:11.346 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.875 00:27:13.875 test: (groupid=0, jobs=1): err= 0: pid=1482444: Sun Dec 15 07:07:35 2024 00:27:13.875 read: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(84.0MiB/2005msec) 00:27:13.875 slat (nsec): min=1335, max=17829, avg=1453.33, stdev=265.84 00:27:13.875 clat (usec): min=2982, max=10273, avg=5902.55, stdev=199.81 00:27:13.875 lat (usec): min=2984, max=10274, avg=5904.00, stdev=199.78 00:27:13.875 clat percentiles (usec): 00:27:13.875 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:27:13.875 | 30.00th=[ 5866], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:13.875 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5932], 95.00th=[ 5997], 00:27:13.875 | 99.00th=[ 6521], 99.50th=[ 6587], 99.90th=[ 8848], 99.95th=[ 9503], 00:27:13.875 | 99.99th=[10290] 00:27:13.875 bw ( KiB/s): min=41408, max=43600, per=99.94%, avg=42894.00, stdev=1017.78, samples=4 00:27:13.875 iops : min=10352, max=10900, avg=10723.50, stdev=254.45, samples=4 00:27:13.875 write: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(83.9MiB/2005msec); 0 zone 
resets 00:27:13.875 slat (nsec): min=1368, max=17633, avg=1560.35, stdev=287.59 00:27:13.875 clat (usec): min=2985, max=10297, avg=5920.05, stdev=176.49 00:27:13.875 lat (usec): min=2988, max=10299, avg=5921.61, stdev=176.46 00:27:13.875 clat percentiles (usec): 00:27:13.875 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:27:13.875 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:13.875 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 5997], 00:27:13.875 | 99.00th=[ 6521], 99.50th=[ 6587], 99.90th=[ 8029], 99.95th=[ 9503], 00:27:13.875 | 99.99th=[10290] 00:27:13.875 bw ( KiB/s): min=41792, max=43344, per=99.97%, avg=42836.00, stdev=706.22, samples=4 00:27:13.875 iops : min=10448, max=10836, avg=10709.00, stdev=176.56, samples=4 00:27:13.875 lat (msec) : 4=0.04%, 10=99.93%, 20=0.03% 00:27:13.875 cpu : usr=99.50%, sys=0.15%, ctx=15, majf=0, minf=2 00:27:13.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:13.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.875 issued rwts: total=21514,21478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.875 00:27:13.875 Run status group 0 (all jobs): 00:27:13.875 READ: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=84.0MiB (88.1MB), run=2005-2005msec 00:27:13.875 WRITE: bw=41.8MiB/s (43.9MB/s), 41.8MiB/s-41.8MiB/s (43.9MB/s-43.9MB/s), io=83.9MiB (88.0MB), run=2005-2005msec 00:27:13.875 07:07:35 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:13.875 07:07:35 -- host/fio.sh@74 -- # sync 00:27:13.875 07:07:35 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:22.057 07:07:42 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:22.057 07:07:42 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:27.320 07:07:48 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:27.320 07:07:48 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:30.604 07:07:51 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:30.604 07:07:51 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:30.604 07:07:51 -- host/fio.sh@86 -- # nvmftestfini 00:27:30.604 07:07:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:30.604 07:07:51 -- nvmf/common.sh@116 -- # sync 00:27:30.604 07:07:51 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:30.604 07:07:51 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:30.604 07:07:51 -- nvmf/common.sh@119 -- # set +e 00:27:30.604 07:07:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:30.604 07:07:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:30.604 rmmod nvme_rdma 00:27:30.604 rmmod nvme_fabrics 00:27:30.604 07:07:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:30.604 07:07:51 -- nvmf/common.sh@123 -- # set -e 00:27:30.604 07:07:51 -- nvmf/common.sh@124 -- # return 0 00:27:30.604 07:07:51 -- nvmf/common.sh@477 -- # '[' -n 1477765 ']' 00:27:30.604 07:07:51 -- 
nvmf/common.sh@478 -- # killprocess 1477765 00:27:30.604 07:07:51 -- common/autotest_common.sh@936 -- # '[' -z 1477765 ']' 00:27:30.604 07:07:51 -- common/autotest_common.sh@940 -- # kill -0 1477765 00:27:30.604 07:07:51 -- common/autotest_common.sh@941 -- # uname 00:27:30.604 07:07:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:30.604 07:07:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1477765 00:27:30.604 07:07:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:30.604 07:07:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:30.604 07:07:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1477765' 00:27:30.604 killing process with pid 1477765 00:27:30.604 07:07:51 -- common/autotest_common.sh@955 -- # kill 1477765 00:27:30.604 07:07:51 -- common/autotest_common.sh@960 -- # wait 1477765 00:27:30.604 07:07:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:30.604 07:07:52 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:30.604 00:27:30.604 real 0m50.034s 00:27:30.604 user 3m37.745s 00:27:30.604 sys 0m7.455s 00:27:30.604 07:07:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:30.604 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.604 ************************************ 00:27:30.604 END TEST nvmf_fio_host 00:27:30.604 ************************************ 00:27:30.604 07:07:52 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:30.604 07:07:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:30.604 07:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:30.604 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:30.604 ************************************ 00:27:30.604 START TEST nvmf_failover 00:27:30.604 ************************************ 00:27:30.604 07:07:52 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:30.604 * Looking for test storage... 00:27:30.604 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:30.604 07:07:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:30.604 07:07:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:30.604 07:07:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:30.604 07:07:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:30.604 07:07:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:30.604 07:07:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:30.604 07:07:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:30.604 07:07:52 -- scripts/common.sh@335 -- # IFS=.-: 00:27:30.604 07:07:52 -- scripts/common.sh@335 -- # read -ra ver1 00:27:30.604 07:07:52 -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.604 07:07:52 -- scripts/common.sh@336 -- # read -ra ver2 00:27:30.604 07:07:52 -- scripts/common.sh@337 -- # local 'op=<' 00:27:30.604 07:07:52 -- scripts/common.sh@339 -- # ver1_l=2 00:27:30.604 07:07:52 -- scripts/common.sh@340 -- # ver2_l=1 00:27:30.604 07:07:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:30.604 07:07:52 -- scripts/common.sh@343 -- # case "$op" in 00:27:30.604 07:07:52 -- scripts/common.sh@344 -- # : 1 00:27:30.604 07:07:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:30.604 07:07:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.863 07:07:52 -- scripts/common.sh@364 -- # decimal 1 00:27:30.864 07:07:52 -- scripts/common.sh@352 -- # local d=1 00:27:30.864 07:07:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.864 07:07:52 -- scripts/common.sh@354 -- # echo 1 00:27:30.864 07:07:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:30.864 07:07:52 -- scripts/common.sh@365 -- # decimal 2 00:27:30.864 07:07:52 -- scripts/common.sh@352 -- # local d=2 00:27:30.864 07:07:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.864 07:07:52 -- scripts/common.sh@354 -- # echo 2 00:27:30.864 07:07:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:30.864 07:07:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:30.864 07:07:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:30.864 07:07:52 -- scripts/common.sh@367 -- # return 0 00:27:30.864 07:07:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.864 07:07:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.864 --rc genhtml_branch_coverage=1 00:27:30.864 --rc genhtml_function_coverage=1 00:27:30.864 --rc genhtml_legend=1 00:27:30.864 --rc geninfo_all_blocks=1 00:27:30.864 --rc geninfo_unexecuted_blocks=1 00:27:30.864 00:27:30.864 ' 00:27:30.864 07:07:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.864 --rc genhtml_branch_coverage=1 00:27:30.864 --rc genhtml_function_coverage=1 00:27:30.864 --rc genhtml_legend=1 00:27:30.864 --rc geninfo_all_blocks=1 00:27:30.864 --rc geninfo_unexecuted_blocks=1 00:27:30.864 00:27:30.864 ' 00:27:30.864 07:07:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.864 --rc genhtml_branch_coverage=1 00:27:30.864 --rc genhtml_function_coverage=1 00:27:30.864 --rc genhtml_legend=1 00:27:30.864 --rc geninfo_all_blocks=1 00:27:30.864 --rc geninfo_unexecuted_blocks=1 00:27:30.864 00:27:30.864 ' 00:27:30.864 07:07:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:30.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.864 --rc genhtml_branch_coverage=1 00:27:30.864 --rc genhtml_function_coverage=1 00:27:30.864 --rc genhtml_legend=1 00:27:30.864 --rc geninfo_all_blocks=1 00:27:30.864 --rc geninfo_unexecuted_blocks=1 00:27:30.864 00:27:30.864 ' 00:27:30.864 07:07:52 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.864 07:07:52 -- nvmf/common.sh@7 -- # uname -s 00:27:30.864 07:07:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.864 07:07:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.864 07:07:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.864 07:07:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.864 07:07:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.864 07:07:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.864 07:07:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.864 07:07:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.864 07:07:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.864 07:07:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.864 07:07:52 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:30.864 07:07:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:30.864 07:07:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.864 07:07:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.864 07:07:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.864 07:07:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:30.864 07:07:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.864 07:07:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.864 07:07:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.864 07:07:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.864 07:07:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.864 07:07:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.864 07:07:52 -- paths/export.sh@5 -- # export PATH 00:27:30.864 07:07:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.864 07:07:52 -- nvmf/common.sh@46 -- # : 0 00:27:30.864 07:07:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:30.864 07:07:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:30.864 07:07:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:30.864 07:07:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.864 07:07:52 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.864 07:07:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:30.864 07:07:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:30.864 07:07:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:30.864 07:07:52 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.864 07:07:52 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.864 07:07:52 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:30.864 07:07:52 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:30.864 07:07:52 -- host/failover.sh@18 -- # nvmftestinit 00:27:30.864 07:07:52 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:30.864 07:07:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.864 07:07:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:30.864 07:07:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:30.864 07:07:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:30.864 07:07:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.864 07:07:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.864 07:07:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.864 07:07:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:30.864 07:07:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:30.864 07:07:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:30.864 07:07:52 -- common/autotest_common.sh@10 -- # set +x 00:27:37.431 07:07:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:37.431 07:07:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:37.431 07:07:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:37.431 07:07:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:37.431 07:07:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:37.431 07:07:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:37.431 07:07:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:37.431 07:07:58 -- nvmf/common.sh@294 -- # net_devs=() 00:27:37.431 07:07:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:37.431 07:07:58 -- nvmf/common.sh@295 -- # e810=() 00:27:37.431 07:07:58 -- nvmf/common.sh@295 -- # local -ga e810 00:27:37.431 07:07:58 -- nvmf/common.sh@296 -- # x722=() 00:27:37.431 07:07:58 -- nvmf/common.sh@296 -- # local -ga x722 00:27:37.431 07:07:58 -- nvmf/common.sh@297 -- # mlx=() 00:27:37.431 07:07:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:37.431 07:07:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.431 07:07:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.431 07:07:58 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:37.431 07:07:58 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:37.431 07:07:58 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:37.431 07:07:58 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:37.431 07:07:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:37.431 07:07:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:37.431 07:07:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:37.431 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:37.431 07:07:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.431 07:07:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:37.431 07:07:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:37.431 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:37.431 07:07:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.431 07:07:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:37.431 07:07:58 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:37.431 07:07:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.431 07:07:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:37.431 07:07:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.431 07:07:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:37.431 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:37.431 07:07:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.431 07:07:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:37.431 07:07:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.431 07:07:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:37.431 07:07:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.431 07:07:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:37.431 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:37.431 07:07:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.431 07:07:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:37.431 07:07:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:37.431 07:07:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:37.431 07:07:58 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:37.431 07:07:58 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
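The trace above is the NIC discovery pass: it matches the two Mellanox ports (vendor 0x15b3, device 0x1015) at 0000:d9:00.0 and 0000:d9:00.1 and resolves them to the netdevs mlx_0_0 and mlx_0_1 before the IB modules are loaded. A minimal sketch of that sysfs walk, assuming the vendor:device pair seen in this run; this is an illustration, not the autotest helper itself:

    # Walk PCI devices and report the netdevs backing the ConnectX ports found above.
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      [[ $vendor == 0x15b3 && $device == 0x1015 ]] || continue
      [[ -d $pci/net ]] || continue   # skip a matching port with no netdev bound
      for netdev in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${netdev##*/}"
      done
    done
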
00:27:37.431 07:07:58 -- nvmf/common.sh@57 -- # uname 00:27:37.431 07:07:58 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:37.431 07:07:58 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:37.431 07:07:58 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:37.431 07:07:58 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:37.431 07:07:58 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:37.431 07:07:58 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:37.431 07:07:58 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:37.431 07:07:58 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:37.431 07:07:58 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:37.431 07:07:58 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:37.431 07:07:58 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:37.431 07:07:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.431 07:07:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:37.431 07:07:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:37.431 07:07:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.432 07:07:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:37.432 07:07:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@104 -- # continue 2 00:27:37.432 07:07:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@104 -- # continue 2 00:27:37.432 07:07:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:37.432 07:07:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:37.432 07:07:58 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:37.432 07:07:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:37.432 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.432 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:37.432 altname enp217s0f0np0 00:27:37.432 altname ens818f0np0 00:27:37.432 inet 192.168.100.8/24 scope global mlx_0_0 00:27:37.432 valid_lft forever preferred_lft forever 00:27:37.432 07:07:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:37.432 07:07:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:37.432 07:07:58 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:27:37.432 07:07:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:37.432 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.432 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:37.432 altname enp217s0f1np1 00:27:37.432 altname ens818f1np1 00:27:37.432 inet 192.168.100.9/24 scope global mlx_0_1 00:27:37.432 valid_lft forever preferred_lft forever 00:27:37.432 07:07:58 -- nvmf/common.sh@410 -- # return 0 00:27:37.432 07:07:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:37.432 07:07:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:37.432 07:07:58 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:37.432 07:07:58 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:37.432 07:07:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.432 07:07:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:37.432 07:07:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:37.432 07:07:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.432 07:07:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:37.432 07:07:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@104 -- # continue 2 00:27:37.432 07:07:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.432 07:07:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.432 07:07:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@104 -- # continue 2 00:27:37.432 07:07:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:37.432 07:07:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:37.432 07:07:58 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:37.432 07:07:58 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:37.432 07:07:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:37.432 07:07:58 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:37.432 192.168.100.9' 00:27:37.432 07:07:58 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:37.432 192.168.100.9' 00:27:37.432 07:07:58 -- nvmf/common.sh@445 -- # head -n 1 00:27:37.432 07:07:58 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:37.432 07:07:58 -- nvmf/common.sh@446 -- # tail -n +2 00:27:37.432 07:07:58 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:37.432 
192.168.100.9' 00:27:37.432 07:07:58 -- nvmf/common.sh@446 -- # head -n 1 00:27:37.432 07:07:58 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:37.432 07:07:58 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:37.432 07:07:58 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:37.432 07:07:58 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:37.432 07:07:58 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:37.432 07:07:58 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:37.432 07:07:59 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:37.432 07:07:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:37.432 07:07:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:37.432 07:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:37.432 07:07:59 -- nvmf/common.sh@469 -- # nvmfpid=1489009 00:27:37.432 07:07:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:37.432 07:07:59 -- nvmf/common.sh@470 -- # waitforlisten 1489009 00:27:37.432 07:07:59 -- common/autotest_common.sh@829 -- # '[' -z 1489009 ']' 00:27:37.432 07:07:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.432 07:07:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.432 07:07:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.432 07:07:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.432 07:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:37.432 [2024-12-15 07:07:59.068687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:37.432 [2024-12-15 07:07:59.068736] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.692 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.692 [2024-12-15 07:07:59.140364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:37.692 [2024-12-15 07:07:59.177640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:37.692 [2024-12-15 07:07:59.177750] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.692 [2024-12-15 07:07:59.177759] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.692 [2024-12-15 07:07:59.177768] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
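The allocate_nic_ips trace above reads each port's IPv4 address straight out of "ip -o -4 addr show" and splits the resulting two-line list into the first and second target IPs. A minimal equivalent, assuming the interface names from this run (the real helpers live in test/nvmf/common.sh):

    # "ip -o" prints one line per address; field 4 is the CIDR form, e.g. 192.168.100.8/24.
    get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
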
00:27:37.692 [2024-12-15 07:07:59.177870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.692 [2024-12-15 07:07:59.177954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.692 [2024-12-15 07:07:59.177956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.259 07:07:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.259 07:07:59 -- common/autotest_common.sh@862 -- # return 0 00:27:38.259 07:07:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:38.259 07:07:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:38.259 07:07:59 -- common/autotest_common.sh@10 -- # set +x 00:27:38.519 07:07:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.519 07:07:59 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:38.519 [2024-12-15 07:08:00.126308] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x999900/0x99ddb0) succeed. 00:27:38.519 [2024-12-15 07:08:00.135376] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x99ae00/0x9df450) succeed. 00:27:38.778 07:08:00 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:39.037 Malloc0 00:27:39.037 07:08:00 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.037 07:08:00 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:39.295 07:08:00 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:39.554 [2024-12-15 07:08:01.005454] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:39.554 07:08:01 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:39.554 [2024-12-15 07:08:01.177782] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:39.813 07:08:01 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:39.813 [2024-12-15 07:08:01.350389] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:27:39.813 07:08:01 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:39.813 07:08:01 -- host/failover.sh@31 -- # bdevperf_pid=1489318 00:27:39.813 07:08:01 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:39.813 07:08:01 -- host/failover.sh@34 -- # waitforlisten 1489318 /var/tmp/bdevperf.sock 00:27:39.813 07:08:01 -- common/autotest_common.sh@829 -- # '[' -z 1489318 ']' 00:27:39.813 07:08:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:39.813 07:08:01 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.813 07:08:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:39.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:39.813 07:08:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.813 07:08:01 -- common/autotest_common.sh@10 -- # set +x 00:27:40.748 07:08:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.748 07:08:02 -- common/autotest_common.sh@862 -- # return 0 00:27:40.748 07:08:02 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:41.007 NVMe0n1 00:27:41.007 07:08:02 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:41.266 00:27:41.266 07:08:02 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:41.266 07:08:02 -- host/failover.sh@39 -- # run_test_pid=1489588 00:27:41.266 07:08:02 -- host/failover.sh@41 -- # sleep 1 00:27:42.202 07:08:03 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:42.461 07:08:03 -- host/failover.sh@45 -- # sleep 3 00:27:45.748 07:08:06 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:45.748 00:27:45.748 07:08:07 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:27:46.007 07:08:07 -- host/failover.sh@50 -- # sleep 3 00:27:49.296 07:08:10 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:49.296 [2024-12-15 07:08:10.558989] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:49.296 07:08:10 -- host/failover.sh@55 -- # sleep 1 00:27:50.233 07:08:11 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:27:50.233 07:08:11 -- host/failover.sh@59 -- # wait 1489588 00:27:56.811 0 00:27:56.811 07:08:17 -- host/failover.sh@61 -- # killprocess 1489318 00:27:56.811 07:08:17 -- common/autotest_common.sh@936 -- # '[' -z 1489318 ']' 00:27:56.811 07:08:17 -- common/autotest_common.sh@940 -- # kill -0 1489318 00:27:56.811 07:08:17 -- common/autotest_common.sh@941 -- # uname 00:27:56.811 07:08:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:56.811 07:08:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1489318 00:27:56.811 07:08:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:56.811 07:08:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:56.811 07:08:17 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1489318' 00:27:56.811 killing process with pid 1489318 00:27:56.811 07:08:17 -- common/autotest_common.sh@955 -- # kill 1489318 00:27:56.811 07:08:17 -- common/autotest_common.sh@960 -- # wait 1489318 00:27:56.811 07:08:18 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:56.811 [2024-12-15 07:08:01.403407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:56.811 [2024-12-15 07:08:01.403469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489318 ] 00:27:56.811 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.811 [2024-12-15 07:08:01.476883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.811 [2024-12-15 07:08:01.513721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.811 Running I/O for 15 seconds... 00:27:56.811 [2024-12-15 07:08:04.941247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.811 [2024-12-15 07:08:04.941291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27868 cdw0:379265b0 sqhd:79c4 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.941304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.811 [2024-12-15 07:08:04.941314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27868 cdw0:379265b0 sqhd:79c4 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.941323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.811 [2024-12-15 07:08:04.941333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27868 cdw0:379265b0 sqhd:79c4 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.941342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.811 [2024-12-15 07:08:04.941351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27868 cdw0:379265b0 sqhd:79c4 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:56.811 [2024-12-15 07:08:04.943131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
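The output that follows is the try.txt dump of the bdevperf run, and the rpc.py sequence above is what drives it: listeners on ports 4420, 4421, and 4422 are added and removed under live verify I/O so the initiator is forced from one portal to the next. Condensed to its core, and reusing the rpc.py and bdevperf socket paths from this job, the shuffle looks roughly like this sketch:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # bdevperf attaches to the first portal and runs verify I/O for 15 seconds.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Dropping the active listener on the target forces a failover to 4421.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    sleep 3   # leave time for the reconnect before the next listener change
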
00:27:56.811 [2024-12-15 07:08:04.943147] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:27:56.811 [2024-12-15 07:08:04.943158] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:56.811 [2024-12-15 07:08:04.943177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x183d00 00:27:56.811 [2024-12-15 07:08:04.943188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.811 [2024-12-15 07:08:04.943232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181300 00:27:56.811 [2024-12-15 07:08:04.943258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.811 [2024-12-15 07:08:04.943301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x183d00 00:27:56.811 [2024-12-15 07:08:04.943347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x183d00 00:27:56.811 [2024-12-15 07:08:04.943388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181300 00:27:56.811 [2024-12-15 07:08:04.943427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x181300 00:27:56.811 [2024-12-15 07:08:04.943467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0 00:27:56.811 [2024-12-15 07:08:04.943484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
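The ABORTED - SQ DELETION (00/08) completions above are the expected signature of the path switch, not a failure: once the 4420 listener is removed the target tears down the submission queues, every outstanding command completes with that status, and the bdev layer requeues the I/O on 4421 (the "Start failover from 192.168.100.8:4420 to 192.168.100.8:4421" notice). A quick, informal way to gauge how much I/O was in flight during the switch is to count those completions per queue in the try.txt dump:

    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' try.txt | sort | uniq -c
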
00:27:56.811 [2024-12-15 07:08:04.943493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0
00:27:56.811 [2024-12-15 07:08:04.943523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181300
00:27:56.811 [2024-12-15 07:08:04.943532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0
00:27:56.811 [2024-12-15 07:08:04.943548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:56.811 [2024-12-15 07:08:04.943557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27868 cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0
[... 2024-12-15 07:08:04.943573 through 07:08:04.947388: the remaining outstanding sqid:1 READ/WRITE commands (lba 89880 through 91176) are printed the same way, each followed by an identical ABORTED - SQ DELETION (00/08) completion with cid:27868 sqhd:0c56 ...]
00:27:56.815 [2024-12-15 07:08:04.961929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:56.815 [2024-12-15 07:08:04.961950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:56.815 [2024-12-15 07:08:04.961960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90488 len:8 PRP1 0x0 PRP2 0x0
00:27:56.815 [2024-12-15 07:08:04.961970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.815 [2024-12-15 07:08:04.962042] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller.
00:27:56.815 [2024-12-15 07:08:04.962053] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:56.815 [2024-12-15 07:08:04.962081] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
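Each *NOTICE* pair above is one still-outstanding command reprinted by nvme_qpair.c next to the completion it received when its submission queue was torn down. The "(00/08)" pair is status code type / status code in hex: SCT 0x0 (generic command status) with SC 0x08, i.e. command aborted due to SQ deletion, and dnr:0 leaves the NVMe Do Not Retry bit clear, so the command may be reissued once the controller reset finishes. A minimal sketch of decoding such a completion record, assuming only the line format shown above (the regex, helper name, and GENERIC_SC table are illustrative, not part of SPDK or this test):

import re

# Matches the tail of an spdk_nvme_print_completion record as it appears
# in this log; field names mirror the NVMe completion queue entry.
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)\s+"
    r"qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)\s+cdw0:(?P<cdw0>[0-9a-fA-F]+)\s+"
    r"sqhd:(?P<sqhd>[0-9a-fA-F]+)\s+p:(?P<p>[01])\s+m:(?P<m>[01])\s+dnr:(?P<dnr>[01])"
)

# Generic Command Status (SCT 0x0) codes seen in this dump.
GENERIC_SC = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

def decode_completion(line):
    """Parse one completion record from the log into typed fields."""
    m = COMPLETION_RE.search(line)
    if m is None:
        raise ValueError("not a completion record")
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    status = GENERIC_SC.get(sc, f"sc={sc:#04x}") if sct == 0 else f"sct={sct:#x} sc={sc:#x}"
    return {
        "status": status,
        "qid": int(m["qid"]),
        "cid": int(m["cid"]),
        "cdw0": int(m["cdw0"], 16),
        "sq_head": int(m["sqhd"], 16),
        "do_not_retry": m["dnr"] == "1",  # set would mean a retry cannot succeed
    }

if __name__ == "__main__":
    sample = ("ABORTED - SQ DELETION (00/08) qid:1 cid:27868 "
              "cdw0:1111a000 sqhd:0c56 p:0 m:0 dnr:0")
    print(decode_completion(sample))

Run on the sample record above it reports the aborted status, qid 1, cid 27868, and do_not_retry False, which is all the state the repeated lines carry besides the per-command LBA.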
00:27:56.815 [2024-12-15 07:08:04.963757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.815 [2024-12-15 07:08:04.991050] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:56.815 [2024-12-15 07:08:08.377620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:56.815 [2024-12-15 07:08:08.377661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0
00:27:56.815 [2024-12-15 07:08:08.377678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x183b00
00:27:56.815 [2024-12-15 07:08:08.377694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0
[... 2024-12-15 07:08:08.377706 onward: a later qpair teardown after the reset aborts the now-outstanding sqid:1 READ/WRITE commands (lba 55880 and up) the same way, each with an identical ABORTED - SQ DELETION (00/08) completion, cid:27870 sqhd:8074; the per-command listing continues below ...]
[2024-12-15 07:08:08.379086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379261] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.817 [2024-12-15 07:08:08.379318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.817 [2024-12-15 07:08:08.379357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.817 [2024-12-15 07:08:08.379432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 
p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181300 00:27:56.817 [2024-12-15 07:08:08.379567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.817 [2024-12-15 07:08:08.379578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x183b00 00:27:56.817 [2024-12-15 07:08:08.379586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379616] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181300 00:27:56.818 [2024-12-15 07:08:08.379624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x181300 00:27:56.818 [2024-12-15 07:08:08.379662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.379682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.379757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x183b00 00:27:56.818 
[2024-12-15 07:08:08.379795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181300 00:27:56.818 [2024-12-15 07:08:08.379814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.379928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.379947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.379966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379980] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.379989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.379999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.380008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.380018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.380027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.380037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x183b00 00:27:56.818 [2024-12-15 07:08:08.380046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.380058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.380067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.380078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.818 [2024-12-15 07:08:08.380086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.380096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181300 00:27:56.818 [2024-12-15 07:08:08.380105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27870 cdw0:1111a000 sqhd:8074 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.391251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:56.818 [2024-12-15 07:08:08.391268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:56.818 [2024-12-15 07:08:08.391279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56520 len:8 PRP1 0x0 PRP2 0x0 00:27:56.818 [2024-12-15 07:08:08.391297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.818 [2024-12-15 07:08:08.391342] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
00:27:56.818 [2024-12-15 07:08:08.391356] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:27:56.818 [2024-12-15 07:08:08.391369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:56.818 [2024-12-15 07:08:08.391405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.818 [2024-12-15 07:08:08.391420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27870 cdw0:0 sqhd:60a2 p:0 m:0 dnr:0
00:27:56.818 [2024-12-15 07:08:08.391432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.818 [2024-12-15 07:08:08.391444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27870 cdw0:0 sqhd:60a2 p:0 m:0 dnr:0
00:27:56.818 [2024-12-15 07:08:08.391456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.818 [2024-12-15 07:08:08.391468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27870 cdw0:0 sqhd:60a2 p:0 m:0 dnr:0
00:27:56.818 [2024-12-15 07:08:08.391480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.818 [2024-12-15 07:08:08.391492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27870 cdw0:0 sqhd:60a2 p:0 m:0 dnr:0
00:27:56.818 [2024-12-15 07:08:08.410532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:56.818 [2024-12-15 07:08:08.410553] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:27:56.818 [2024-12-15 07:08:08.410564] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:56.818 [2024-12-15 07:08:08.412340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.818 [2024-12-15 07:08:08.448433] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
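[Editor's aside: the NOTICE pairs in these abort storms have a fixed shape (a nvme_io_qpair_print_command line naming the opcode, sqid, and lba, followed by a spdk_nvme_print_completion line naming the status and qid), so they can be summarized mechanically. Below is a minimal Python sketch, not part of the SPDK test harness; the regexes and the one-log-entry-per-line assumption are illustrative. Fed the raw console log on stdin, it tallies aborted commands per queue and opcode and reports the LBA range touched.]

    #!/usr/bin/env python3
    # Minimal sketch: summarize SPDK nvme_qpair abort storms from a test log.
    # Assumes one log entry per line, in the format seen above, e.g.
    #   ... nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56056 len:8 ...
    #   ... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ...
    import re
    import sys
    from collections import Counter

    # Command prints: opcode (READ/WRITE), submission queue id, starting LBA.
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+)")
    # Completion prints: status string (e.g. "ABORTED - SQ DELETION (00/08)") and qid.
    CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) qid:(\d+)")

    def summarize(lines):
        cmds = Counter()   # (sqid, opcode) -> count
        cpls = Counter()   # (qid, status)  -> count
        lbas = []
        for line in lines:
            m = CMD_RE.search(line)
            if m:
                cmds[(int(m.group(2)), m.group(1))] += 1
                lbas.append(int(m.group(3)))
            m = CPL_RE.search(line)
            if m:
                cpls[(int(m.group(2)), m.group(1))] += 1
        return cmds, cpls, lbas

    if __name__ == "__main__":
        cmds, cpls, lbas = summarize(sys.stdin)
        for (sqid, op), n in sorted(cmds.items()):
            print(f"sqid {sqid}: {n} {op} commands")
        for (qid, status), n in sorted(cpls.items()):
            print(f"qid {qid}: {n} completions with status {status!r}")
        if lbas:
            print(f"LBA range touched: {min(lbas)}-{max(lbas)}")

[On this section of the log, such a tally would show every outstanding READ/WRITE on qid 1 completing with the SQ-deletion status during each failover, which is the expected behavior when bdev_nvme deletes the queue pair and retries the I/O on the new path.]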
00:27:56.818 [2024-12-15 07:08:12.765003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:56.818 [2024-12-15 07:08:12.765046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0
[... dozens of near-identical command/completion pairs elided: READ and WRITE commands on sqid:1 (lba 94312-95440) each printed and completed with ABORTED - SQ DELETION (00/08) qid:1 sqhd:8788 as the submission queue is torn down again after the failover ...]
00:27:56.821 [2024-12-15 07:08:12.766880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.766890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181300 00:27:56.821 [2024-12-15 07:08:12.766900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.766911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181300 00:27:56.821 [2024-12-15 07:08:12.766920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.766930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.821 [2024-12-15 07:08:12.766939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.766950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x183d00 00:27:56.821 [2024-12-15 07:08:12.766960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.766971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x183d00 00:27:56.821 [2024-12-15 07:08:12.766984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.766994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181300 00:27:56.821 [2024-12-15 07:08:12.767003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.821 [2024-12-15 07:08:12.767023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x183d00 00:27:56.821 [2024-12-15 07:08:12.767043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x183d00 00:27:56.821 [2024-12-15 07:08:12.767063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x183d00 00:27:56.821 [2024-12-15 07:08:12.767082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.821 [2024-12-15 07:08:12.767101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x183d00 00:27:56.821 [2024-12-15 07:08:12.767120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.821 [2024-12-15 07:08:12.767140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181300 00:27:56.821 [2024-12-15 07:08:12.767159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.821 [2024-12-15 07:08:12.767178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.821 [2024-12-15 07:08:12.767201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x183d00 00:27:56.821 [2024-12-15 07:08:12.767220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.821 [2024-12-15 07:08:12.767239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.821 [2024-12-15 07:08:12.767249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181300 00:27:56.821 
[2024-12-15 07:08:12.767258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x183d00 00:27:56.822 [2024-12-15 07:08:12.767278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x183d00 00:27:56.822 [2024-12-15 07:08:12.767336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181300 00:27:56.822 [2024-12-15 07:08:12.767415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x183d00 00:27:56.822 [2024-12-15 07:08:12.767434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 
[2024-12-15 07:08:12.767445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x181300 00:27:56.822 [2024-12-15 07:08:12.767454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.767540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.822 [2024-12-15 07:08:12.767549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27872 cdw0:1111a000 sqhd:8788 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.769383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:56.822 [2024-12-15 07:08:12.769396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:56.822 [2024-12-15 07:08:12.769405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94976 len:8 PRP1 0x0 PRP2 0x0 00:27:56.822 [2024-12-15 07:08:12.769414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.822 [2024-12-15 07:08:12.769454] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:27:56.822 [2024-12-15 07:08:12.769466] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:27:56.822 [2024-12-15 07:08:12.769476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
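Every *NOTICE* pair above is one queued command being completed manually with ABORTED - SQ DELETION (00/08) while bdev_nvme tears down the submission queue on the failing path; once the qpair is freed, bdev_nvme_failover_trid moves the controller from 192.168.100.8:4422 back to 192.168.100.8:4420 and resets it. A quick way to triage a saved copy of this console output (a sketch; build.log is a placeholder name, not a file this job produces):

    # Count how many in-flight commands were aborted by the SQ teardown
    grep -c 'ABORTED - SQ DELETION' build.log

    # List the failover transitions in chronological order, with occurrence counts
    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' build.log | uniq -c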
00:27:56.822 [2024-12-15 07:08:12.771244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:56.822 [2024-12-15 07:08:12.785433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:56.822 [2024-12-15 07:08:12.820787] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:56.822
00:27:56.822 Latency(us)
00:27:56.822 [2024-12-15T06:08:18.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.822 [2024-12-15T06:08:18.463Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:56.822 Verification LBA range: start 0x0 length 0x4000
00:27:56.822 NVMe0n1 : 15.00 20194.99 78.89 319.64 0.00 6228.31 353.89 1046898.28
00:27:56.822 [2024-12-15T06:08:18.463Z] ===================================================================================================================
00:27:56.822 [2024-12-15T06:08:18.463Z] Total : 20194.99 78.89 319.64 0.00 6228.31 353.89 1046898.28
00:27:56.822 Received shutdown signal, test time was about 15.000000 seconds
00:27:56.822
00:27:56.822 Latency(us)
00:27:56.822 [2024-12-15T06:08:18.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.822 [2024-12-15T06:08:18.463Z] ===================================================================================================================
00:27:56.822 [2024-12-15T06:08:18.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:56.822 07:08:18 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:56.822 07:08:18 -- host/failover.sh@65 -- # count=3
00:27:56.822 07:08:18 -- host/failover.sh@67 -- # (( count != 3 ))
00:27:56.822 07:08:18 -- host/failover.sh@73 -- # bdevperf_pid=1492279
00:27:56.822 07:08:18 -- host/failover.sh@75 -- # waitforlisten 1492279 /var/tmp/bdevperf.sock
00:27:56.822 07:08:18 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:56.822 07:08:18 -- common/autotest_common.sh@829 -- # '[' -z 1492279 ']'
00:27:56.822 07:08:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:56.822 07:08:18 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:56.822 07:08:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
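The trace above is host/failover.sh verifying the run: it requires exactly three 'Resetting controller successful' lines in try.txt (one per forced path change), then relaunches bdevperf suspended with -z and an RPC socket so controllers can be attached before any I/O starts. The same sequence as a standalone sketch (binaries and flags taken from the trace; the explicit error message is added here and is not part of the script):

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # One successful controller reset per forced failover is expected
    count=$(grep -c 'Resetting controller successful' "$spdk/test/nvmf/host/try.txt")
    (( count == 3 )) || { echo "expected 3 resets, saw $count"; exit 1; }

    # Start bdevperf idle (-z); it listens on /var/tmp/bdevperf.sock for RPCs
    # and only runs the verify workload once perform_tests is requested
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!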
00:27:56.822 07:08:18 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:56.822 07:08:18 -- common/autotest_common.sh@10 -- # set +x
00:27:57.760 07:08:19 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:57.760 07:08:19 -- common/autotest_common.sh@862 -- # return 0
00:27:57.760 07:08:19 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:27:57.760 [2024-12-15 07:08:19.197651] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:27:57.760 07:08:19 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:27:58.019 [2024-12-15 07:08:19.378302] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:27:58.019 07:08:19 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:58.278 NVMe0n1
00:27:58.278 07:08:19 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:58.278
00:27:58.536 07:08:19 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:58.536
00:27:58.536 07:08:20 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:58.536 07:08:20 -- host/failover.sh@82 -- # grep -q NVMe0
00:27:58.795 07:08:20 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:59.054 07:08:20 -- host/failover.sh@87 -- # sleep 3
00:28:02.341 07:08:23 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:02.341 07:08:23 -- host/failover.sh@88 -- # grep -q NVMe0
00:28:02.341 07:08:23 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:02.341 07:08:23 -- host/failover.sh@90 -- # run_test_pid=1493112
00:28:02.341 07:08:23 -- host/failover.sh@92 -- # wait 1493112
00:28:03.277 0
00:28:03.277 07:08:24 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:03.277 [2024-12-15 07:08:18.211055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:03.277 [2024-12-15 07:08:18.211111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492279 ]
00:28:03.277 EAL: No free 2048 kB hugepages reported on node 1
00:28:03.277 [2024-12-15 07:08:18.282115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:03.277 [2024-12-15 07:08:18.315375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:03.278 [2024-12-15 07:08:20.481577] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:28:03.278 [2024-12-15 07:08:20.482189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:03.278 [2024-12-15 07:08:20.482217] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:03.278 [2024-12-15 07:08:20.502132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:03.278 [2024-12-15 07:08:20.518328] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:03.278 Running I/O for 1 seconds...
00:28:03.278
00:28:03.278 Latency(us)
00:28:03.278 [2024-12-15T06:08:24.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:03.278 [2024-12-15T06:08:24.919Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:03.278 Verification LBA range: start 0x0 length 0x4000
00:28:03.278 NVMe0n1 : 1.00 25363.73 99.08 0.00 0.00 5022.90 1205.86 13159.63
00:28:03.278 [2024-12-15T06:08:24.919Z] ===================================================================================================================
00:28:03.278 [2024-12-15T06:08:24.919Z] Total : 25363.73 99.08 0.00 0.00 5022.90 1205.86 13159.63
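With the one-second verify run above complete, the script peels the alternate paths off one at a time: detach the 4422 trid, confirm via bdev_nvme_get_controllers that the NVMe0 controller is still present, then repeat for 4421, leaving only the original 4420 path. The two spelled-out iterations collapse into a loop like this (a sketch; $spdk as in the previous snippet):

    rpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    for port in 4422 4421; do
        # Drop one path from the multipath controller
        $rpc bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
            -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        # The controller must survive on the remaining path(s)
        $rpc bdev_nvme_get_controllers | grep -q NVMe0 || exit 1
    done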
00:28:03.278 07:08:24 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:03.278 07:08:24 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:03.536 07:08:24 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:03.536 07:08:25 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:03.794 07:08:25 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:03.794 07:08:25 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:04.053 07:08:25 -- host/failover.sh@101 -- # sleep 3
00:28:07.340 07:08:28 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:07.340 07:08:28 -- host/failover.sh@103 -- # grep -q NVMe0
00:28:07.340 07:08:28 -- host/failover.sh@108 -- # killprocess 1492279
00:28:07.340 07:08:28 -- common/autotest_common.sh@936 -- # '[' -z 1492279 ']'
00:28:07.340 07:08:28 -- common/autotest_common.sh@940 -- # kill -0 1492279
00:28:07.340 07:08:28 -- common/autotest_common.sh@941 -- # uname
00:28:07.340 07:08:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:07.340 07:08:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1492279
00:28:07.340 07:08:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:07.340 07:08:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:07.340 07:08:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1492279'
killing process with pid 1492279
00:28:07.340 07:08:28 -- common/autotest_common.sh@955 -- # kill 1492279
00:28:07.340 07:08:28 -- common/autotest_common.sh@960 -- # wait 1492279
00:28:07.599 07:08:28 -- host/failover.sh@110 -- # sync
00:28:07.599 07:08:28 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:07.599 07:08:29 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:07.599 07:08:29 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:07.599 07:08:29 -- host/failover.sh@116 -- # nvmftestfini
00:28:07.599 07:08:29 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:07.599 07:08:29 -- nvmf/common.sh@116 -- # sync
00:28:07.599 07:08:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:28:07.599 07:08:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:28:07.599 07:08:29 -- nvmf/common.sh@119 -- # set +e
00:28:07.600 07:08:29 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:07.600 07:08:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:28:07.600 07:08:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:07.600 07:08:29 -- nvmf/common.sh@123 -- # set -e
00:28:07.600 07:08:29 -- nvmf/common.sh@124 -- # return 0
00:28:07.600 07:08:29 -- nvmf/common.sh@477 -- # '[' -n 1489009 ']'
00:28:07.600 07:08:29 -- nvmf/common.sh@478 -- # killprocess 1489009
00:28:07.600 07:08:29 -- common/autotest_common.sh@936 -- # '[' -z 1489009 ']'
00:28:07.600 07:08:29 -- common/autotest_common.sh@940 -- # kill -0 1489009
00:28:07.600 07:08:29 -- common/autotest_common.sh@941 -- # uname
00:28:07.875 07:08:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:07.875 07:08:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1489009
00:28:07.875 07:08:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:07.875 07:08:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:07.875 07:08:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1489009'
killing process with pid 1489009
00:28:07.875 07:08:29 -- common/autotest_common.sh@955 -- # kill 1489009
00:28:07.875 07:08:29 -- common/autotest_common.sh@960 -- # wait 1489009
00:28:08.134 07:08:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:08.134 07:08:29 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:28:08.134
00:28:08.134 real 0m37.472s
00:28:08.134 user 2m4.368s
00:28:08.134 sys 0m7.486s
00:28:08.134 07:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:08.134 07:08:29 -- common/autotest_common.sh@10 -- # set +x
00:28:08.134 ************************************
00:28:08.134 END TEST nvmf_failover
00:28:08.134 ************************************
00:28:08.134 07:08:29 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:28:08.134 07:08:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:08.134 07:08:29 --
common/autotest_common.sh@1093 -- # xtrace_disable 00:28:08.134 07:08:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.134 ************************************ 00:28:08.134 START TEST nvmf_discovery 00:28:08.134 ************************************ 00:28:08.134 07:08:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:08.134 * Looking for test storage... 00:28:08.134 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:08.134 07:08:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:08.134 07:08:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:08.134 07:08:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:08.134 07:08:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:08.134 07:08:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:08.134 07:08:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:08.134 07:08:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:08.134 07:08:29 -- scripts/common.sh@335 -- # IFS=.-: 00:28:08.134 07:08:29 -- scripts/common.sh@335 -- # read -ra ver1 00:28:08.134 07:08:29 -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.134 07:08:29 -- scripts/common.sh@336 -- # read -ra ver2 00:28:08.134 07:08:29 -- scripts/common.sh@337 -- # local 'op=<' 00:28:08.134 07:08:29 -- scripts/common.sh@339 -- # ver1_l=2 00:28:08.134 07:08:29 -- scripts/common.sh@340 -- # ver2_l=1 00:28:08.134 07:08:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:08.134 07:08:29 -- scripts/common.sh@343 -- # case "$op" in 00:28:08.134 07:08:29 -- scripts/common.sh@344 -- # : 1 00:28:08.134 07:08:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:08.134 07:08:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.134 07:08:29 -- scripts/common.sh@364 -- # decimal 1 00:28:08.134 07:08:29 -- scripts/common.sh@352 -- # local d=1 00:28:08.134 07:08:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.134 07:08:29 -- scripts/common.sh@354 -- # echo 1 00:28:08.134 07:08:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:08.134 07:08:29 -- scripts/common.sh@365 -- # decimal 2 00:28:08.134 07:08:29 -- scripts/common.sh@352 -- # local d=2 00:28:08.134 07:08:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.134 07:08:29 -- scripts/common.sh@354 -- # echo 2 00:28:08.134 07:08:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:08.134 07:08:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:08.134 07:08:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:08.134 07:08:29 -- scripts/common.sh@367 -- # return 0 00:28:08.134 07:08:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.134 07:08:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:08.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.134 --rc genhtml_branch_coverage=1 00:28:08.134 --rc genhtml_function_coverage=1 00:28:08.134 --rc genhtml_legend=1 00:28:08.134 --rc geninfo_all_blocks=1 00:28:08.134 --rc geninfo_unexecuted_blocks=1 00:28:08.134 00:28:08.134 ' 00:28:08.135 07:08:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:08.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.135 --rc genhtml_branch_coverage=1 00:28:08.135 --rc genhtml_function_coverage=1 00:28:08.135 --rc genhtml_legend=1 00:28:08.135 --rc geninfo_all_blocks=1 00:28:08.135 --rc geninfo_unexecuted_blocks=1 00:28:08.135 00:28:08.135 ' 00:28:08.135 07:08:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:08.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.135 --rc genhtml_branch_coverage=1 00:28:08.135 --rc genhtml_function_coverage=1 00:28:08.135 --rc genhtml_legend=1 00:28:08.135 --rc geninfo_all_blocks=1 00:28:08.135 --rc geninfo_unexecuted_blocks=1 00:28:08.135 00:28:08.135 ' 00:28:08.135 07:08:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:08.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.135 --rc genhtml_branch_coverage=1 00:28:08.135 --rc genhtml_function_coverage=1 00:28:08.135 --rc genhtml_legend=1 00:28:08.135 --rc geninfo_all_blocks=1 00:28:08.135 --rc geninfo_unexecuted_blocks=1 00:28:08.135 00:28:08.135 ' 00:28:08.135 07:08:29 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.135 07:08:29 -- nvmf/common.sh@7 -- # uname -s 00:28:08.135 07:08:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.135 07:08:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.135 07:08:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.135 07:08:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.135 07:08:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.135 07:08:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.135 07:08:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.135 07:08:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.135 07:08:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.135 07:08:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.135 07:08:29 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:08.135 07:08:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:08.135 07:08:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.135 07:08:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.394 07:08:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.394 07:08:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:08.394 07:08:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.394 07:08:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.394 07:08:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.394 07:08:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.394 07:08:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.394 07:08:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.394 07:08:29 -- paths/export.sh@5 -- # export PATH 00:28:08.394 07:08:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.394 07:08:29 -- nvmf/common.sh@46 -- # : 0 00:28:08.394 07:08:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:08.394 07:08:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:08.394 07:08:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:08.394 07:08:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.394 07:08:29 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.394 07:08:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:08.394 07:08:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:08.394 07:08:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:08.394 07:08:29 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:08.394 07:08:29 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:08.394 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:08.394 07:08:29 -- host/discovery.sh@13 -- # exit 0 00:28:08.394 00:28:08.394 real 0m0.175s 00:28:08.394 user 0m0.095s 00:28:08.394 sys 0m0.093s 00:28:08.394 07:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:08.395 07:08:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.395 ************************************ 00:28:08.395 END TEST nvmf_discovery 00:28:08.395 ************************************ 00:28:08.395 07:08:29 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:08.395 07:08:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:08.395 07:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:08.395 07:08:29 -- common/autotest_common.sh@10 -- # set +x 00:28:08.395 ************************************ 00:28:08.395 START TEST nvmf_discovery_remove_ifc 00:28:08.395 ************************************ 00:28:08.395 07:08:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:08.395 * Looking for test storage... 00:28:08.395 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:08.395 07:08:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:08.395 07:08:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:08.395 07:08:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:08.395 07:08:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:08.395 07:08:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:08.395 07:08:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:08.395 07:08:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:08.395 07:08:29 -- scripts/common.sh@335 -- # IFS=.-: 00:28:08.395 07:08:29 -- scripts/common.sh@335 -- # read -ra ver1 00:28:08.395 07:08:29 -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.395 07:08:29 -- scripts/common.sh@336 -- # read -ra ver2 00:28:08.395 07:08:29 -- scripts/common.sh@337 -- # local 'op=<' 00:28:08.395 07:08:29 -- scripts/common.sh@339 -- # ver1_l=2 00:28:08.395 07:08:29 -- scripts/common.sh@340 -- # ver2_l=1 00:28:08.395 07:08:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:08.395 07:08:29 -- scripts/common.sh@343 -- # case "$op" in 00:28:08.395 07:08:29 -- scripts/common.sh@344 -- # : 1 00:28:08.395 07:08:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:08.395 07:08:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.395 07:08:29 -- scripts/common.sh@364 -- # decimal 1 00:28:08.395 07:08:29 -- scripts/common.sh@352 -- # local d=1 00:28:08.395 07:08:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.395 07:08:29 -- scripts/common.sh@354 -- # echo 1 00:28:08.395 07:08:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:08.395 07:08:30 -- scripts/common.sh@365 -- # decimal 2 00:28:08.395 07:08:30 -- scripts/common.sh@352 -- # local d=2 00:28:08.395 07:08:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.395 07:08:30 -- scripts/common.sh@354 -- # echo 2 00:28:08.395 07:08:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:08.395 07:08:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:08.395 07:08:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:08.395 07:08:30 -- scripts/common.sh@367 -- # return 0 00:28:08.395 07:08:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.395 07:08:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.395 --rc genhtml_branch_coverage=1 00:28:08.395 --rc genhtml_function_coverage=1 00:28:08.395 --rc genhtml_legend=1 00:28:08.395 --rc geninfo_all_blocks=1 00:28:08.395 --rc geninfo_unexecuted_blocks=1 00:28:08.395 00:28:08.395 ' 00:28:08.395 07:08:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.395 --rc genhtml_branch_coverage=1 00:28:08.395 --rc genhtml_function_coverage=1 00:28:08.395 --rc genhtml_legend=1 00:28:08.395 --rc geninfo_all_blocks=1 00:28:08.395 --rc geninfo_unexecuted_blocks=1 00:28:08.395 00:28:08.395 ' 00:28:08.395 07:08:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.395 --rc genhtml_branch_coverage=1 00:28:08.395 --rc genhtml_function_coverage=1 00:28:08.395 --rc genhtml_legend=1 00:28:08.395 --rc geninfo_all_blocks=1 00:28:08.395 --rc geninfo_unexecuted_blocks=1 00:28:08.395 00:28:08.395 ' 00:28:08.395 07:08:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:08.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.395 --rc genhtml_branch_coverage=1 00:28:08.395 --rc genhtml_function_coverage=1 00:28:08.395 --rc genhtml_legend=1 00:28:08.395 --rc geninfo_all_blocks=1 00:28:08.395 --rc geninfo_unexecuted_blocks=1 00:28:08.395 00:28:08.395 ' 00:28:08.395 07:08:30 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.395 07:08:30 -- nvmf/common.sh@7 -- # uname -s 00:28:08.395 07:08:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.395 07:08:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.395 07:08:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.395 07:08:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.395 07:08:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.395 07:08:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.395 07:08:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.395 07:08:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.395 07:08:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.395 07:08:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.395 07:08:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:08.395 07:08:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:08.395 07:08:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.395 07:08:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.395 07:08:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.395 07:08:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:08.654 07:08:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.654 07:08:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.654 07:08:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.654 07:08:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.654 07:08:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.654 07:08:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.654 07:08:30 -- paths/export.sh@5 -- # export PATH 00:28:08.654 07:08:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.654 07:08:30 -- nvmf/common.sh@46 -- # : 0 00:28:08.654 07:08:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:08.654 07:08:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:08.654 07:08:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:08.654 07:08:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.654 07:08:30 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.654 07:08:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:08.654 07:08:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:08.654 07:08:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:08.654 07:08:30 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:08.654 07:08:30 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:08.654 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:08.654 07:08:30 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:08.654 00:28:08.654 real 0m0.218s 00:28:08.654 user 0m0.121s 00:28:08.654 sys 0m0.113s 00:28:08.654 07:08:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:08.654 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:28:08.654 ************************************ 00:28:08.654 END TEST nvmf_discovery_remove_ifc 00:28:08.654 ************************************ 00:28:08.654 07:08:30 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:08.654 07:08:30 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:08.654 07:08:30 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:08.654 07:08:30 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:08.654 07:08:30 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:08.654 07:08:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:08.654 07:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:08.654 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:28:08.654 ************************************ 00:28:08.654 START TEST nvmf_bdevperf 00:28:08.654 ************************************ 00:28:08.654 07:08:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:08.654 * Looking for test storage... 00:28:08.654 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:08.654 07:08:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:08.654 07:08:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:08.654 07:08:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:08.654 07:08:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:08.654 07:08:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:08.654 07:08:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:08.654 07:08:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:08.654 07:08:30 -- scripts/common.sh@335 -- # IFS=.-: 00:28:08.654 07:08:30 -- scripts/common.sh@335 -- # read -ra ver1 00:28:08.654 07:08:30 -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.654 07:08:30 -- scripts/common.sh@336 -- # read -ra ver2 00:28:08.654 07:08:30 -- scripts/common.sh@337 -- # local 'op=<' 00:28:08.654 07:08:30 -- scripts/common.sh@339 -- # ver1_l=2 00:28:08.654 07:08:30 -- scripts/common.sh@340 -- # ver2_l=1 00:28:08.654 07:08:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:08.654 07:08:30 -- scripts/common.sh@343 -- # case "$op" in 00:28:08.654 07:08:30 -- scripts/common.sh@344 -- # : 1 00:28:08.654 07:08:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:08.654 07:08:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.654 07:08:30 -- scripts/common.sh@364 -- # decimal 1 00:28:08.654 07:08:30 -- scripts/common.sh@352 -- # local d=1 00:28:08.654 07:08:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.654 07:08:30 -- scripts/common.sh@354 -- # echo 1 00:28:08.654 07:08:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:08.654 07:08:30 -- scripts/common.sh@365 -- # decimal 2 00:28:08.654 07:08:30 -- scripts/common.sh@352 -- # local d=2 00:28:08.654 07:08:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.654 07:08:30 -- scripts/common.sh@354 -- # echo 2 00:28:08.654 07:08:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:08.654 07:08:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:08.654 07:08:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:08.654 07:08:30 -- scripts/common.sh@367 -- # return 0 00:28:08.654 07:08:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.654 07:08:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:08.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.654 --rc genhtml_branch_coverage=1 00:28:08.654 --rc genhtml_function_coverage=1 00:28:08.654 --rc genhtml_legend=1 00:28:08.654 --rc geninfo_all_blocks=1 00:28:08.654 --rc geninfo_unexecuted_blocks=1 00:28:08.654 00:28:08.654 ' 00:28:08.654 07:08:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:08.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.654 --rc genhtml_branch_coverage=1 00:28:08.654 --rc genhtml_function_coverage=1 00:28:08.654 --rc genhtml_legend=1 00:28:08.654 --rc geninfo_all_blocks=1 00:28:08.654 --rc geninfo_unexecuted_blocks=1 00:28:08.654 00:28:08.654 ' 00:28:08.654 07:08:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:08.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.654 --rc genhtml_branch_coverage=1 00:28:08.654 --rc genhtml_function_coverage=1 00:28:08.654 --rc genhtml_legend=1 00:28:08.654 --rc geninfo_all_blocks=1 00:28:08.654 --rc geninfo_unexecuted_blocks=1 00:28:08.654 00:28:08.654 ' 00:28:08.654 07:08:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:08.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.654 --rc genhtml_branch_coverage=1 00:28:08.655 --rc genhtml_function_coverage=1 00:28:08.655 --rc genhtml_legend=1 00:28:08.655 --rc geninfo_all_blocks=1 00:28:08.655 --rc geninfo_unexecuted_blocks=1 00:28:08.655 00:28:08.655 ' 00:28:08.655 07:08:30 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.655 07:08:30 -- nvmf/common.sh@7 -- # uname -s 00:28:08.655 07:08:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.655 07:08:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.655 07:08:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.655 07:08:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.655 07:08:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.655 07:08:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.655 07:08:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.655 07:08:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.655 07:08:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.655 07:08:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.655 07:08:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:08.655 07:08:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:08.655 07:08:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.655 07:08:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.655 07:08:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.655 07:08:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:08.914 07:08:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.914 07:08:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.914 07:08:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.914 07:08:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.914 07:08:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.914 07:08:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.914 07:08:30 -- paths/export.sh@5 -- # export PATH 00:28:08.914 07:08:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.914 07:08:30 -- nvmf/common.sh@46 -- # : 0 00:28:08.914 07:08:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:08.914 07:08:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:08.914 07:08:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:08.914 07:08:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.914 07:08:30 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.914 07:08:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:08.914 07:08:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:08.914 07:08:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:08.914 07:08:30 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:08.914 07:08:30 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:08.914 07:08:30 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:08.914 07:08:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:08.914 07:08:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.914 07:08:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:08.914 07:08:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:08.914 07:08:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:08.914 07:08:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.914 07:08:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.914 07:08:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.914 07:08:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:08.914 07:08:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:08.914 07:08:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:08.914 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:28:15.588 07:08:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:15.588 07:08:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:15.588 07:08:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:15.588 07:08:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:15.588 07:08:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:15.588 07:08:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:15.588 07:08:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:15.588 07:08:36 -- nvmf/common.sh@294 -- # net_devs=() 00:28:15.588 07:08:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:15.588 07:08:36 -- nvmf/common.sh@295 -- # e810=() 00:28:15.588 07:08:36 -- nvmf/common.sh@295 -- # local -ga e810 00:28:15.588 07:08:36 -- nvmf/common.sh@296 -- # x722=() 00:28:15.588 07:08:36 -- nvmf/common.sh@296 -- # local -ga x722 00:28:15.588 07:08:36 -- nvmf/common.sh@297 -- # mlx=() 00:28:15.588 07:08:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:15.588 07:08:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.588 07:08:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:15.588 07:08:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:15.588 
07:08:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:15.588 07:08:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:15.588 07:08:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:15.588 07:08:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:15.588 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:15.588 07:08:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:15.588 07:08:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:15.588 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:15.588 07:08:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:15.588 07:08:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:15.588 07:08:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.588 07:08:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:15.588 07:08:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.588 07:08:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:15.588 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:15.588 07:08:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.588 07:08:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.588 07:08:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:15.588 07:08:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.588 07:08:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:15.588 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:15.588 07:08:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.588 07:08:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:15.588 07:08:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:15.588 07:08:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:15.588 07:08:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:15.588 07:08:36 -- nvmf/common.sh@57 -- # uname 00:28:15.588 07:08:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:15.588 07:08:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:15.588 
07:08:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:15.588 07:08:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:15.588 07:08:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:15.588 07:08:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:15.588 07:08:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:15.588 07:08:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:15.588 07:08:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:15.588 07:08:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:15.588 07:08:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:15.588 07:08:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:15.588 07:08:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:15.588 07:08:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:15.588 07:08:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:15.588 07:08:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:15.588 07:08:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:15.588 07:08:36 -- nvmf/common.sh@104 -- # continue 2 00:28:15.588 07:08:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:15.588 07:08:36 -- nvmf/common.sh@104 -- # continue 2 00:28:15.588 07:08:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:15.588 07:08:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:15.588 07:08:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:15.588 07:08:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:15.588 07:08:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:15.588 07:08:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:15.588 07:08:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:15.588 07:08:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:15.588 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:15.588 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:15.588 altname enp217s0f0np0 00:28:15.588 altname ens818f0np0 00:28:15.588 inet 192.168.100.8/24 scope global mlx_0_0 00:28:15.588 valid_lft forever preferred_lft forever 00:28:15.588 07:08:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:15.588 07:08:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:15.588 07:08:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:15.588 07:08:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:15.588 07:08:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:15.588 07:08:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:15.588 07:08:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:15.588 07:08:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:15.588 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:28:15.588 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:15.588 altname enp217s0f1np1 00:28:15.588 altname ens818f1np1 00:28:15.588 inet 192.168.100.9/24 scope global mlx_0_1 00:28:15.588 valid_lft forever preferred_lft forever 00:28:15.588 07:08:36 -- nvmf/common.sh@410 -- # return 0 00:28:15.588 07:08:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:15.588 07:08:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:15.588 07:08:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:15.588 07:08:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:15.588 07:08:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:15.588 07:08:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:15.588 07:08:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:15.588 07:08:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:15.588 07:08:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:15.588 07:08:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.588 07:08:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:15.588 07:08:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:15.589 07:08:36 -- nvmf/common.sh@104 -- # continue 2 00:28:15.589 07:08:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:15.589 07:08:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.589 07:08:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:15.589 07:08:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.589 07:08:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:15.589 07:08:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:15.589 07:08:36 -- nvmf/common.sh@104 -- # continue 2 00:28:15.589 07:08:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:15.589 07:08:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:15.589 07:08:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:15.589 07:08:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:15.589 07:08:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:15.589 07:08:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:15.589 07:08:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:15.589 07:08:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:15.589 07:08:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:15.589 07:08:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:15.589 07:08:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:15.589 07:08:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:15.589 07:08:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:15.589 192.168.100.9' 00:28:15.589 07:08:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:15.589 192.168.100.9' 00:28:15.589 07:08:36 -- nvmf/common.sh@445 -- # head -n 1 00:28:15.589 07:08:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:15.589 07:08:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:15.589 192.168.100.9' 00:28:15.589 07:08:36 -- nvmf/common.sh@446 -- # head -n 1 00:28:15.589 07:08:36 -- nvmf/common.sh@446 -- # tail -n +2 00:28:15.589 07:08:36 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:15.589 07:08:36 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:28:15.589 07:08:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:15.589 07:08:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:15.589 07:08:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:15.589 07:08:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:15.589 07:08:36 -- host/bdevperf.sh@25 -- # tgt_init 00:28:15.589 07:08:36 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:15.589 07:08:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:15.589 07:08:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.589 07:08:36 -- common/autotest_common.sh@10 -- # set +x 00:28:15.589 07:08:36 -- nvmf/common.sh@469 -- # nvmfpid=1497484 00:28:15.589 07:08:36 -- nvmf/common.sh@470 -- # waitforlisten 1497484 00:28:15.589 07:08:36 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:15.589 07:08:36 -- common/autotest_common.sh@829 -- # '[' -z 1497484 ']' 00:28:15.589 07:08:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.589 07:08:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.589 07:08:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.589 07:08:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.589 07:08:36 -- common/autotest_common.sh@10 -- # set +x 00:28:15.589 [2024-12-15 07:08:36.738083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:15.589 [2024-12-15 07:08:36.738139] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.589 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.589 [2024-12-15 07:08:36.813937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:15.589 [2024-12-15 07:08:36.854141] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:15.589 [2024-12-15 07:08:36.854251] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.589 [2024-12-15 07:08:36.854262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.589 [2024-12-15 07:08:36.854271] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
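A note on the step just traced: nvmfappstart launched the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), recorded its pid in nvmfpid, and waitforlisten blocked until the app answered on /var/tmp/spdk.sock. A minimal sketch of that start-and-poll pattern, simplified from the suite's nvmfappstart/waitforlisten helpers (the polling interval and error handling here are illustrative, paths and flags are the trace's own):

    # Start the target, then wait until its RPC socket is serviceable.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
        # If the target died during startup there is nothing to wait for.
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is listening on $rpc_addr"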
00:28:15.589 [2024-12-15 07:08:36.854365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.589 [2024-12-15 07:08:36.854483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.589 [2024-12-15 07:08:36.854485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.156 07:08:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:16.156 07:08:37 -- common/autotest_common.sh@862 -- # return 0 00:28:16.156 07:08:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:16.156 07:08:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:16.156 07:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:16.156 07:08:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.156 07:08:37 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:16.156 07:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.156 07:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:16.156 [2024-12-15 07:08:37.634218] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2079900/0x207ddb0) succeed. 00:28:16.156 [2024-12-15 07:08:37.643324] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x207ae00/0x20bf450) succeed. 00:28:16.156 07:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.156 07:08:37 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:16.156 07:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.156 07:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:16.156 Malloc0 00:28:16.156 07:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.156 07:08:37 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.156 07:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.156 07:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:16.156 07:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.156 07:08:37 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.156 07:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.156 07:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:16.156 07:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.156 07:08:37 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:16.156 07:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.156 07:08:37 -- common/autotest_common.sh@10 -- # set +x 00:28:16.415 [2024-12-15 07:08:37.797610] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:16.415 07:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.415 07:08:37 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:16.415 07:08:37 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:16.415 07:08:37 -- nvmf/common.sh@520 -- # config=() 00:28:16.415 07:08:37 -- nvmf/common.sh@520 -- # local subsystem config 00:28:16.415 07:08:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:16.415 07:08:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:16.415 { 00:28:16.415 "params": { 00:28:16.415 "name": "Nvme$subsystem", 00:28:16.415 "trtype": 
"$TEST_TRANSPORT", 00:28:16.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.415 "adrfam": "ipv4", 00:28:16.415 "trsvcid": "$NVMF_PORT", 00:28:16.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.415 "hdgst": ${hdgst:-false}, 00:28:16.415 "ddgst": ${ddgst:-false} 00:28:16.415 }, 00:28:16.415 "method": "bdev_nvme_attach_controller" 00:28:16.415 } 00:28:16.415 EOF 00:28:16.415 )") 00:28:16.415 07:08:37 -- nvmf/common.sh@542 -- # cat 00:28:16.415 07:08:37 -- nvmf/common.sh@544 -- # jq . 00:28:16.415 07:08:37 -- nvmf/common.sh@545 -- # IFS=, 00:28:16.415 07:08:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:16.415 "params": { 00:28:16.415 "name": "Nvme1", 00:28:16.415 "trtype": "rdma", 00:28:16.415 "traddr": "192.168.100.8", 00:28:16.415 "adrfam": "ipv4", 00:28:16.415 "trsvcid": "4420", 00:28:16.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:16.415 "hdgst": false, 00:28:16.415 "ddgst": false 00:28:16.415 }, 00:28:16.415 "method": "bdev_nvme_attach_controller" 00:28:16.415 }' 00:28:16.415 [2024-12-15 07:08:37.848317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:16.415 [2024-12-15 07:08:37.848364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497771 ] 00:28:16.415 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.415 [2024-12-15 07:08:37.920028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.415 [2024-12-15 07:08:37.956619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.673 Running I/O for 1 seconds... 
00:28:17.609
00:28:17.609 Latency(us)
00:28:17.609 [2024-12-15T06:08:39.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:17.609 [2024-12-15T06:08:39.250Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:17.609 Verification LBA range: start 0x0 length 0x4000
00:28:17.609 Nvme1n1 : 1.00 25730.26 100.51 0.00 0.00 4951.06 1245.18 12006.20
00:28:17.609 [2024-12-15T06:08:39.250Z] ===================================================================================================================
00:28:17.609 [2024-12-15T06:08:39.250Z] Total : 25730.26 100.51 0.00 0.00 4951.06 1245.18 12006.20
00:28:17.951 07:08:39 -- host/bdevperf.sh@30 -- # bdevperfpid=1498047
00:28:17.951 07:08:39 -- host/bdevperf.sh@32 -- # sleep 3
00:28:17.951 07:08:39 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:17.951 07:08:39 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:17.951 07:08:39 -- nvmf/common.sh@520 -- # config=()
00:28:17.951 07:08:39 -- nvmf/common.sh@520 -- # local subsystem config
00:28:17.951 07:08:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:28:17.951 07:08:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:28:17.951 {
00:28:17.951 "params": {
00:28:17.951 "name": "Nvme$subsystem",
00:28:17.951 "trtype": "$TEST_TRANSPORT",
00:28:17.951 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:17.951 "adrfam": "ipv4",
00:28:17.951 "trsvcid": "$NVMF_PORT",
00:28:17.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:17.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:17.951 "hdgst": ${hdgst:-false},
00:28:17.951 "ddgst": ${ddgst:-false}
00:28:17.951 },
00:28:17.951 "method": "bdev_nvme_attach_controller"
00:28:17.951 }
00:28:17.951 EOF
00:28:17.951 )")
00:28:17.951 07:08:39 -- nvmf/common.sh@542 -- # cat
00:28:17.951 07:08:39 -- nvmf/common.sh@544 -- # jq .
00:28:17.951 07:08:39 -- nvmf/common.sh@545 -- # IFS=,
00:28:17.951 07:08:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:28:17.951 "params": {
00:28:17.951 "name": "Nvme1",
00:28:17.951 "trtype": "rdma",
00:28:17.951 "traddr": "192.168.100.8",
00:28:17.951 "adrfam": "ipv4",
00:28:17.951 "trsvcid": "4420",
00:28:17.951 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:17.951 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:17.951 "hdgst": false,
00:28:17.951 "ddgst": false
00:28:17.951 },
00:28:17.951 "method": "bdev_nvme_attach_controller"
00:28:17.951 }'
[2024-12-15 07:08:39.373913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
[2024-12-15 07:08:39.373970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498047 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-12-15 07:08:39.444599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-15 07:08:39.478551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:18.209 Running I/O for 15 seconds...
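With the 15-second verify job now in flight against the target, the script pulls the target out from under it to exercise the host's abort path. In outline (the pid is the trace's own, saved when nvmf_tgt was started; the -f flag is carried over verbatim from the bdevperf invocation above):

    # Failover exercise: hard-kill the target under a running workload.
    kill -9 1497484    # nvmfpid recorded at target startup
    sleep 3            # let the initiator observe the dead RDMA connection
    # Every command still queued on the qpair now completes with
    # "ABORTED - SQ DELETION (00/08)" instead of success, as below.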
00:28:20.739 07:08:42 -- host/bdevperf.sh@33 -- # kill -9 1497484
00:28:20.739 07:08:42 -- host/bdevperf.sh@35 -- # sleep 3
00:28:22.117 [2024-12-15 07:08:43.362487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x181300
00:28:22.117 [2024-12-15 07:08:43.362526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0
00:28:22.117 [2024-12-15 07:08:43.362622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:27496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:22.117 [2024-12-15 07:08:43.362631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) pair repeats for every remaining in-flight READ and WRITE on qpair 1, LBAs 26784 through 28000, keys 0x181300/0x183b00 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.119 [2024-12-15 07:08:43.364322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.119 [2024-12-15 07:08:43.364332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181300 00:28:22.119 [2024-12-15 07:08:43.364341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.119 [2024-12-15 07:08:43.364351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181300 00:28:22.119 [2024-12-15 07:08:43.364359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.119 [2024-12-15 07:08:43.364369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.119 [2024-12-15 07:08:43.364378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.119 [2024-12-15 07:08:43.364388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.119 [2024-12-15 07:08:43.364398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27304 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007542000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.120 [2024-12-15 07:08:43.364549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.120 [2024-12-15 07:08:43.364568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.120 [2024-12-15 07:08:43.364719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.120 [2024-12-15 07:08:43.364775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x183b00 00:28:22.120 [2024-12-15 07:08:43.364812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.120 [2024-12-15 07:08:43.364830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 
dnr:0 00:28:22.120 [2024-12-15 07:08:43.364840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.120 [2024-12-15 07:08:43.364904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.364924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.364934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x181300 00:28:22.120 [2024-12-15 07:08:43.374431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.374468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.120 [2024-12-15 07:08:43.374481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:27890 cdw0:22224000 sqhd:c10c p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.376387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.120 [2024-12-15 07:08:43.376403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.120 [2024-12-15 07:08:43.376414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28160 len:8 PRP1 0x0 PRP2 0x0 00:28:22.120 [2024-12-15 07:08:43.376425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.376472] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
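Every queued I/O in the dump above is printed and then completed manually with ABORTED - SQ DELETION once the RDMA qpair is torn down; the dump ends when the qpair is freed and the bdev layer schedules a controller reset. When triaging a run like this from a saved console log, a quick way to size the abort storm is a simple count (the file name bdevperf.log is only an assumption here):

  grep -c 'ABORTED - SQ DELETION' bdevperf.log   # number of manually aborted commands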
00:28:22.120 [2024-12-15 07:08:43.376512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.120 [2024-12-15 07:08:43.376525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27890 cdw0:0 sqhd:c4ee p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.376537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.120 [2024-12-15 07:08:43.376548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27890 cdw0:0 sqhd:c4ee p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.376560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.120 [2024-12-15 07:08:43.376570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27890 cdw0:0 sqhd:c4ee p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.376582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.120 [2024-12-15 07:08:43.376593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:27890 cdw0:0 sqhd:c4ee p:0 m:0 dnr:0 00:28:22.120 [2024-12-15 07:08:43.394651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:22.120 [2024-12-15 07:08:43.394708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.120 [2024-12-15 07:08:43.394719] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.121 [2024-12-15 07:08:43.396423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.121 [2024-12-15 07:08:43.398673] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:22.121 [2024-12-15 07:08:43.398692] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:22.121 [2024-12-15 07:08:43.398701] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:23.056 [2024-12-15 07:08:44.402767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:23.056 [2024-12-15 07:08:44.402835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.056 [2024-12-15 07:08:44.403011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.056 [2024-12-15 07:08:44.403023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.056 [2024-12-15 07:08:44.403033] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:23.056 [2024-12-15 07:08:44.403788] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:23.056 [2024-12-15 07:08:44.404604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
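The cycle above (disconnect, RDMA_CM_EVENT_REJECTED, RDMA connect error -74, reset failed) repeats roughly once per second; the cause becomes visible just below, where bdevperf.sh has killed the nvmf_tgt process out from under the host. A minimal sketch of that fault-injection step, with the pid variable and paths assumed rather than taken from the script:

  # kill the target under active I/O; the host logs CQ transport error -6 and retries
  kill -9 "$nvmfpid"
  sleep 1
  "$rootdir"/build/bin/nvmf_tgt -m 0xE &   # restart; reconnects succeed once port 4420 listens again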
00:28:23.056 [2024-12-15 07:08:44.415609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.056 [2024-12-15 07:08:44.417929] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:23.056 [2024-12-15 07:08:44.417948] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:23.056 [2024-12-15 07:08:44.417957] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:23.991 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1497484 Killed "${NVMF_APP[@]}" "$@" 00:28:23.991 07:08:45 -- host/bdevperf.sh@36 -- # tgt_init 00:28:23.991 07:08:45 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:23.991 07:08:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:23.991 07:08:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:23.991 07:08:45 -- common/autotest_common.sh@10 -- # set +x 00:28:23.991 07:08:45 -- nvmf/common.sh@469 -- # nvmfpid=1499098 00:28:23.991 07:08:45 -- nvmf/common.sh@470 -- # waitforlisten 1499098 00:28:23.991 07:08:45 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:23.991 07:08:45 -- common/autotest_common.sh@829 -- # '[' -z 1499098 ']' 00:28:23.991 07:08:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.991 07:08:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.991 07:08:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.991 07:08:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.991 07:08:45 -- common/autotest_common.sh@10 -- # set +x 00:28:23.991 [2024-12-15 07:08:45.395202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:23.991 [2024-12-15 07:08:45.395253] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.991 [2024-12-15 07:08:45.421863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:23.991 [2024-12-15 07:08:45.421888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.991 [2024-12-15 07:08:45.422011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.991 [2024-12-15 07:08:45.422023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.991 [2024-12-15 07:08:45.422034] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:23.991 [2024-12-15 07:08:45.422324] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:23.991 [2024-12-15 07:08:45.423790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
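nvmfappstart above relaunches the target with '-i 0 -e 0xFFFF -m 0xE' and then blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent of that start-and-wait step (rpc.py path assumed; rpc_get_methods is used only as a cheap liveness query):

  "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready to serve requests
  until "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done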
00:28:23.991 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.991 [2024-12-15 07:08:45.434231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.991 [2024-12-15 07:08:45.436427] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:23.991 [2024-12-15 07:08:45.436448] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:23.991 [2024-12-15 07:08:45.436456] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:23.991 [2024-12-15 07:08:45.467289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:23.991 [2024-12-15 07:08:45.504806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:23.991 [2024-12-15 07:08:45.504932] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.991 [2024-12-15 07:08:45.504943] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.991 [2024-12-15 07:08:45.504952] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.991 [2024-12-15 07:08:45.504999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.991 [2024-12-15 07:08:45.505100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.991 [2024-12-15 07:08:45.505102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.926 07:08:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.926 07:08:46 -- common/autotest_common.sh@862 -- # return 0 00:28:24.926 07:08:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:24.926 07:08:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:24.926 07:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.926 07:08:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.926 07:08:46 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:24.926 07:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.926 07:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.926 [2024-12-15 07:08:46.291235] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16f9900/0x16fddb0) succeed. 00:28:24.926 [2024-12-15 07:08:46.300470] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16fae00/0x173f450) succeed. 
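The app banner above points at the built-in tracing hooks: with tracepoint group mask 0xFFFF enabled, a snapshot of nvmf events can be pulled from the running instance, or the shared-memory trace file can be copied for offline decoding, exactly as the output suggests:

  build/bin/spdk_trace -s nvmf -i 0    # live snapshot of app instance 0
  cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the raw trace file for offline analysis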
00:28:24.926 07:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.926 07:08:46 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:24.926 07:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.926 07:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.926 Malloc0 00:28:24.926 07:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.926 07:08:46 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.926 07:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.926 07:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.926 07:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.926 07:08:46 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:24.926 07:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.926 07:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.926 07:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.926 07:08:46 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:24.926 07:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.926 07:08:46 -- common/autotest_common.sh@10 -- # set +x 00:28:24.926 [2024-12-15 07:08:46.440421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:24.926 [2024-12-15 07:08:46.440453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.926 [2024-12-15 07:08:46.440599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.926 [2024-12-15 07:08:46.440612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.926 [2024-12-15 07:08:46.440622] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:24.926 [2024-12-15 07:08:46.441405] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:24.926 [2024-12-15 07:08:46.441921] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:24.926 [2024-12-15 07:08:46.442329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.926 07:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.926 07:08:46 -- host/bdevperf.sh@38 -- # wait 1498047 00:28:24.926 [2024-12-15 07:08:46.453733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.926 [2024-12-15 07:08:46.483448] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
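tgt_init reprovisions the target through rpc_cmd: create the RDMA transport, back it with a 64 MiB malloc bdev of 512-byte blocks, expose it as cnode1, and listen on 192.168.100.8:4420; 'wait 1498047' then lets the earlier bdevperf job (core mask 0x1, verify workload, queue depth 128, 4 KiB I/O, 15 s) run against the rebuilt target and produce the summary that follows. Stripped of the xtrace noise, the same sequence as direct rpc.py calls would look roughly like this ($rootdir and the rpc.py path are assumptions):

  rpc="$rootdir"/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420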
00:28:33.056
00:28:33.056 Latency(us)
00:28:33.056 [2024-12-15T06:08:54.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.056 [2024-12-15T06:08:54.697Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:33.056 Verification LBA range: start 0x0 length 0x4000
00:28:33.056 Nvme1n1 : 15.00 18604.18 72.67 16518.16 0.00 3633.02 378.47 1060320.05
00:28:33.056 [2024-12-15T06:08:54.697Z] ===================================================================================================================
00:28:33.056 [2024-12-15T06:08:54.697Z] Total : 18604.18 72.67 16518.16 0.00 3633.02 378.47 1060320.05
00:28:33.315 07:08:54 -- host/bdevperf.sh@39 -- # sync
00:28:33.315 07:08:54 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:33.315 07:08:54 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:33.315 07:08:54 -- common/autotest_common.sh@10 -- # set +x
00:28:33.315 07:08:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:33.315 07:08:54 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:33.315 07:08:54 -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:33.315 07:08:54 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:33.315 07:08:54 -- nvmf/common.sh@116 -- # sync
00:28:33.315 07:08:54 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:28:33.315 07:08:54 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:28:33.315 07:08:54 -- nvmf/common.sh@119 -- # set +e
00:28:33.315 07:08:54 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:33.315 07:08:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:28:33.315 rmmod nvme_rdma
00:28:33.315 rmmod nvme_fabrics
00:28:33.315 07:08:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:33.315 07:08:54 -- nvmf/common.sh@123 -- # set -e
00:28:33.315 07:08:54 -- nvmf/common.sh@124 -- # return 0
00:28:33.315 07:08:54 -- nvmf/common.sh@477 -- # '[' -n 1499098 ']'
00:28:33.315 07:08:54 -- nvmf/common.sh@478 -- # killprocess 1499098
00:28:33.315 07:08:54 -- common/autotest_common.sh@936 -- # '[' -z 1499098 ']'
00:28:33.315 07:08:54 -- common/autotest_common.sh@940 -- # kill -0 1499098
00:28:33.574 07:08:54 -- common/autotest_common.sh@941 -- # uname
00:28:33.574 07:08:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:33.574 07:08:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1499098
00:28:33.574 07:08:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:33.574 07:08:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:33.574 07:08:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1499098'
00:28:33.574 killing process with pid 1499098
00:28:33.574 07:08:55 -- common/autotest_common.sh@955 -- # kill 1499098
00:28:33.574 07:08:55 -- common/autotest_common.sh@960 -- # wait 1499098
00:28:33.832 07:08:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:33.832 07:08:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:28:33.832
00:28:33.832 real 0m25.173s
00:28:33.832 user 1m4.218s
00:28:33.832 sys 0m6.209s
00:28:33.832 07:08:55 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:33.832 07:08:55 -- common/autotest_common.sh@10 -- # set +x
00:28:33.832 ************************************
00:28:33.832 END TEST nvmf_bdevperf
00:28:33.832 ************************************
00:28:33.832 07:08:55 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh
--transport=rdma 00:28:33.832 07:08:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:33.832 07:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:33.832 07:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:33.832 ************************************ 00:28:33.832 START TEST nvmf_target_disconnect 00:28:33.832 ************************************ 00:28:33.832 07:08:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:28:33.832 * Looking for test storage... 00:28:33.832 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:33.832 07:08:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:33.832 07:08:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:33.832 07:08:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:34.090 07:08:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:34.090 07:08:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:34.090 07:08:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:34.090 07:08:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:34.090 07:08:55 -- scripts/common.sh@335 -- # IFS=.-: 00:28:34.090 07:08:55 -- scripts/common.sh@335 -- # read -ra ver1 00:28:34.090 07:08:55 -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.090 07:08:55 -- scripts/common.sh@336 -- # read -ra ver2 00:28:34.090 07:08:55 -- scripts/common.sh@337 -- # local 'op=<' 00:28:34.090 07:08:55 -- scripts/common.sh@339 -- # ver1_l=2 00:28:34.090 07:08:55 -- scripts/common.sh@340 -- # ver2_l=1 00:28:34.090 07:08:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:34.090 07:08:55 -- scripts/common.sh@343 -- # case "$op" in 00:28:34.090 07:08:55 -- scripts/common.sh@344 -- # : 1 00:28:34.090 07:08:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:34.090 07:08:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.090 07:08:55 -- scripts/common.sh@364 -- # decimal 1 00:28:34.090 07:08:55 -- scripts/common.sh@352 -- # local d=1 00:28:34.090 07:08:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.090 07:08:55 -- scripts/common.sh@354 -- # echo 1 00:28:34.090 07:08:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:34.090 07:08:55 -- scripts/common.sh@365 -- # decimal 2 00:28:34.090 07:08:55 -- scripts/common.sh@352 -- # local d=2 00:28:34.090 07:08:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.090 07:08:55 -- scripts/common.sh@354 -- # echo 2 00:28:34.090 07:08:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:34.090 07:08:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:34.090 07:08:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:34.091 07:08:55 -- scripts/common.sh@367 -- # return 0 00:28:34.091 07:08:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.091 07:08:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:34.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.091 --rc genhtml_branch_coverage=1 00:28:34.091 --rc genhtml_function_coverage=1 00:28:34.091 --rc genhtml_legend=1 00:28:34.091 --rc geninfo_all_blocks=1 00:28:34.091 --rc geninfo_unexecuted_blocks=1 00:28:34.091 00:28:34.091 ' 00:28:34.091 07:08:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:34.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.091 --rc genhtml_branch_coverage=1 00:28:34.091 --rc genhtml_function_coverage=1 00:28:34.091 --rc genhtml_legend=1 00:28:34.091 --rc geninfo_all_blocks=1 00:28:34.091 --rc geninfo_unexecuted_blocks=1 00:28:34.091 00:28:34.091 ' 00:28:34.091 07:08:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:34.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.091 --rc genhtml_branch_coverage=1 00:28:34.091 --rc genhtml_function_coverage=1 00:28:34.091 --rc genhtml_legend=1 00:28:34.091 --rc geninfo_all_blocks=1 00:28:34.091 --rc geninfo_unexecuted_blocks=1 00:28:34.091 00:28:34.091 ' 00:28:34.091 07:08:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:34.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.091 --rc genhtml_branch_coverage=1 00:28:34.091 --rc genhtml_function_coverage=1 00:28:34.091 --rc genhtml_legend=1 00:28:34.091 --rc geninfo_all_blocks=1 00:28:34.091 --rc geninfo_unexecuted_blocks=1 00:28:34.091 00:28:34.091 ' 00:28:34.091 07:08:55 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.091 07:08:55 -- nvmf/common.sh@7 -- # uname -s 00:28:34.091 07:08:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.091 07:08:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.091 07:08:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.091 07:08:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.091 07:08:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.091 07:08:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.091 07:08:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.091 07:08:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.091 07:08:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.091 07:08:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.091 07:08:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:34.091 07:08:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:34.091 07:08:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.091 07:08:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.091 07:08:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.091 07:08:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:34.091 07:08:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.091 07:08:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.091 07:08:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.091 07:08:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.091 07:08:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.091 07:08:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.091 07:08:55 -- paths/export.sh@5 -- # export PATH 00:28:34.091 07:08:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.091 07:08:55 -- nvmf/common.sh@46 -- # : 0 00:28:34.091 07:08:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:34.091 07:08:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:34.091 07:08:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:34.091 07:08:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.091 07:08:55 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.091 07:08:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:34.091 07:08:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:34.091 07:08:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:34.091 07:08:55 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:34.091 07:08:55 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:34.091 07:08:55 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:34.091 07:08:55 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:34.091 07:08:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:34.091 07:08:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.091 07:08:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:34.091 07:08:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:34.091 07:08:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:34.091 07:08:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.091 07:08:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.091 07:08:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.091 07:08:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:34.091 07:08:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:34.091 07:08:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:34.091 07:08:55 -- common/autotest_common.sh@10 -- # set +x 00:28:40.654 07:09:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:40.654 07:09:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:40.654 07:09:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:40.654 07:09:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:40.654 07:09:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:40.654 07:09:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:40.654 07:09:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:40.654 07:09:01 -- nvmf/common.sh@294 -- # net_devs=() 00:28:40.654 07:09:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:40.654 07:09:01 -- nvmf/common.sh@295 -- # e810=() 00:28:40.654 07:09:01 -- nvmf/common.sh@295 -- # local -ga e810 00:28:40.654 07:09:01 -- nvmf/common.sh@296 -- # x722=() 00:28:40.654 07:09:01 -- nvmf/common.sh@296 -- # local -ga x722 00:28:40.654 07:09:01 -- nvmf/common.sh@297 -- # mlx=() 00:28:40.654 07:09:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:40.654 07:09:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.654 07:09:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
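gather_supported_nvmf_pci_devs above builds lists of Intel (e810/x722) and Mellanox vendor:device ids and then walks the PCI bus for matches; the two hits that follow are ConnectX-4 Lx ports (0x15b3:0x1015). A rough one-liner equivalent of that scan, using the ids from this run:

  lspci -Dnn | grep -i '15b3:1015'   # -> 0000:d9:00.0 and 0000:d9:00.1 on this node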
00:28:40.654 07:09:01 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:40.654 07:09:01 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:40.654 07:09:01 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:40.654 07:09:01 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:40.654 07:09:01 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:40.654 07:09:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:40.654 07:09:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:40.654 07:09:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:40.654 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:40.654 07:09:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:40.654 07:09:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:40.654 07:09:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:40.654 07:09:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:40.655 07:09:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:40.655 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:40.655 07:09:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:40.655 07:09:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:40.655 07:09:01 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.655 07:09:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:40.655 07:09:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.655 07:09:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:40.655 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.655 07:09:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.655 07:09:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:40.655 07:09:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.655 07:09:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:40.655 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.655 07:09:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:40.655 07:09:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:40.655 07:09:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:40.655 07:09:01 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:40.655 07:09:01 -- nvmf/common.sh@57 -- # uname 
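rdma_device_init loads the IB/RDMA kernel modules, and allocate_nic_ips (traced below) then resolves each RDMA netdev's IPv4 address by taking the fourth field of 'ip -o -4 addr show' and stripping the prefix length. Distilled from the trace that follows into a standalone helper:

  get_ip_address() {
      local interface=$1
      # fourth field is e.g. 192.168.100.8/24; cut drops the /24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node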
00:28:40.655 07:09:01 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:40.655 07:09:01 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:40.655 07:09:01 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:40.655 07:09:01 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:40.655 07:09:01 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:40.655 07:09:01 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:40.655 07:09:01 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:40.655 07:09:01 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:40.655 07:09:01 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:40.655 07:09:01 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:40.655 07:09:01 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:40.655 07:09:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:40.655 07:09:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:40.655 07:09:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:40.655 07:09:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:40.655 07:09:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:40.655 07:09:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@104 -- # continue 2 00:28:40.655 07:09:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@104 -- # continue 2 00:28:40.655 07:09:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:40.655 07:09:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.655 07:09:01 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:40.655 07:09:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:40.655 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:40.655 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:40.655 altname enp217s0f0np0 00:28:40.655 altname ens818f0np0 00:28:40.655 inet 192.168.100.8/24 scope global mlx_0_0 00:28:40.655 valid_lft forever preferred_lft forever 00:28:40.655 07:09:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:40.655 07:09:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.655 07:09:01 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:40.655 07:09:01 -- nvmf/common.sh@74 -- # 
[[ -z 192.168.100.9 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:40.655 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:40.655 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:40.655 altname enp217s0f1np1 00:28:40.655 altname ens818f1np1 00:28:40.655 inet 192.168.100.9/24 scope global mlx_0_1 00:28:40.655 valid_lft forever preferred_lft forever 00:28:40.655 07:09:01 -- nvmf/common.sh@410 -- # return 0 00:28:40.655 07:09:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:40.655 07:09:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:40.655 07:09:01 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:40.655 07:09:01 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:40.655 07:09:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:40.655 07:09:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:40.655 07:09:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:40.655 07:09:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:40.655 07:09:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:40.655 07:09:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@104 -- # continue 2 00:28:40.655 07:09:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.655 07:09:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:40.655 07:09:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@104 -- # continue 2 00:28:40.655 07:09:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:40.655 07:09:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.655 07:09:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:40.655 07:09:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.655 07:09:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.655 07:09:01 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:40.655 192.168.100.9' 00:28:40.655 07:09:01 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:40.655 192.168.100.9' 00:28:40.655 07:09:01 -- nvmf/common.sh@445 -- # head -n 1 00:28:40.655 07:09:01 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:40.655 07:09:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:40.655 192.168.100.9' 00:28:40.655 07:09:01 -- nvmf/common.sh@446 -- # tail -n +2 00:28:40.655 07:09:01 -- nvmf/common.sh@446 -- # 
head -n 1 00:28:40.655 07:09:01 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:40.655 07:09:01 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:40.655 07:09:01 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:40.655 07:09:01 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:40.655 07:09:01 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:40.655 07:09:01 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:40.655 07:09:02 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:40.655 07:09:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:40.655 07:09:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:40.655 07:09:02 -- common/autotest_common.sh@10 -- # set +x 00:28:40.655 ************************************ 00:28:40.655 START TEST nvmf_target_disconnect_tc1 00:28:40.655 ************************************ 00:28:40.655 07:09:02 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:28:40.655 07:09:02 -- host/target_disconnect.sh@32 -- # set +e 00:28:40.655 07:09:02 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:40.655 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.655 [2024-12-15 07:09:02.141016] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:40.655 [2024-12-15 07:09:02.141127] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:40.655 [2024-12-15 07:09:02.141158] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:28:41.590 [2024-12-15 07:09:03.145200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:41.590 [2024-12-15 07:09:03.145262] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
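The error cascade that begins here and continues below is deliberate: tc1 launches the reconnect example before any target listens on 192.168.100.8:4420, so the RDMA CM answers REJECTED, spdk_nvme_probe() fails, and the test passes only when that failure is observed (the '[' 1 '!=' 1 ']' check further down). A minimal paraphrase of that flow, assuming the exit status is captured into rc as in host/target_disconnect.sh, not the script verbatim:

set +e                                    # tolerate the expected failure (script line 32)
build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
rc=$?                                     # nothing is listening yet, so this should be 1
set -e                                    # restore strict mode (script line 41)
[ "$rc" != 1 ] && exit 1                  # tc1 passes only when the probe failed as expected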
00:28:41.590 [2024-12-15 07:09:03.145297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:28:41.590 [2024-12-15 07:09:03.145353] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:41.590 [2024-12-15 07:09:03.145381] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:41.590 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:28:41.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:41.590 Initializing NVMe Controllers 00:28:41.590 07:09:03 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:41.590 07:09:03 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:41.590 07:09:03 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:28:41.590 07:09:03 -- common/autotest_common.sh@1142 -- # return 0 00:28:41.590 07:09:03 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:41.590 07:09:03 -- host/target_disconnect.sh@41 -- # set -e 00:28:41.590 00:28:41.590 real 0m1.126s 00:28:41.590 user 0m0.850s 00:28:41.590 sys 0m0.264s 00:28:41.590 07:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:41.590 07:09:03 -- common/autotest_common.sh@10 -- # set +x 00:28:41.590 ************************************ 00:28:41.590 END TEST nvmf_target_disconnect_tc1 00:28:41.590 ************************************ 00:28:41.590 07:09:03 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:41.590 07:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:41.590 07:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:41.590 07:09:03 -- common/autotest_common.sh@10 -- # set +x 00:28:41.590 ************************************ 00:28:41.590 START TEST nvmf_target_disconnect_tc2 00:28:41.590 ************************************ 00:28:41.590 07:09:03 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:28:41.590 07:09:03 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:28:41.590 07:09:03 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:41.590 07:09:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:41.590 07:09:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:41.590 07:09:03 -- common/autotest_common.sh@10 -- # set +x 00:28:41.590 07:09:03 -- nvmf/common.sh@469 -- # nvmfpid=1504351 00:28:41.590 07:09:03 -- nvmf/common.sh@470 -- # waitforlisten 1504351 00:28:41.590 07:09:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:41.590 07:09:03 -- common/autotest_common.sh@829 -- # '[' -z 1504351 ']' 00:28:41.590 07:09:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.590 07:09:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.590 07:09:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.590 07:09:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.590 07:09:03 -- common/autotest_common.sh@10 -- # set +x 00:28:41.849 [2024-12-15 07:09:03.261046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:41.849 [2024-12-15 07:09:03.261123] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.849 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.849 [2024-12-15 07:09:03.347720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.849 [2024-12-15 07:09:03.384248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:41.849 [2024-12-15 07:09:03.384377] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.849 [2024-12-15 07:09:03.384387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.849 [2024-12-15 07:09:03.384396] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.849 [2024-12-15 07:09:03.384831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:41.849 [2024-12-15 07:09:03.384921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:41.849 [2024-12-15 07:09:03.385024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:41.849 [2024-12-15 07:09:03.385025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:42.784 07:09:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:42.784 07:09:04 -- common/autotest_common.sh@862 -- # return 0 00:28:42.784 07:09:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:42.784 07:09:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:42.784 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 07:09:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.784 07:09:04 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.784 07:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.784 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 Malloc0 00:28:42.784 07:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.784 07:09:04 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:42.784 07:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.784 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 [2024-12-15 07:09:04.172538] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d29ab0/0x1d35580) succeed. 00:28:42.784 [2024-12-15 07:09:04.181950] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d2b050/0x1d76c20) succeed. 
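With the target app up (pid 1504351) and the RDMA transport bound to both mlx5 devices, the rest of the bring-up is plain RPC; the subsystem and listener calls appear in the log just below. Collected in one place, the sequence is equivalent to the following rpc.py invocations (rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py):

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420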
00:28:42.784 07:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.784 07:09:04 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.784 07:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.784 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 07:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.784 07:09:04 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.784 07:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.784 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 07:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.784 07:09:04 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:42.784 07:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.784 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 [2024-12-15 07:09:04.326393] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:42.784 07:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.784 07:09:04 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:42.784 07:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.784 07:09:04 -- common/autotest_common.sh@10 -- # set +x 00:28:42.784 07:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.784 07:09:04 -- host/target_disconnect.sh@50 -- # reconnectpid=1504517 00:28:42.784 07:09:04 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:42.784 07:09:04 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:42.784 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.311 07:09:06 -- host/target_disconnect.sh@53 -- # kill -9 1504351 00:28:45.311 07:09:06 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error 
(sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Read completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 Write completed with error (sct=0, sc=8) 00:28:46.246 starting I/O failed 00:28:46.246 [2024-12-15 07:09:07.532631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.813 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1504351 Killed "${NVMF_APP[@]}" "$@" 00:28:46.813 07:09:08 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:28:46.813 07:09:08 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:46.813 07:09:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:46.813 07:09:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:46.813 07:09:08 -- common/autotest_common.sh@10 -- # set +x 00:28:46.813 07:09:08 -- nvmf/common.sh@469 -- # nvmfpid=1505614 00:28:46.813 07:09:08 -- nvmf/common.sh@470 -- # waitforlisten 1505614 00:28:46.813 07:09:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:46.813 07:09:08 -- common/autotest_common.sh@829 -- # '[' -z 1505614 ']' 00:28:46.813 07:09:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.813 07:09:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.813 07:09:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.813 07:09:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.813 07:09:08 -- common/autotest_common.sh@10 -- # set +x 00:28:46.813 [2024-12-15 07:09:08.405660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:46.813 [2024-12-15 07:09:08.405711] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.813 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.072 [2024-12-15 07:09:08.492116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.072 [2024-12-15 07:09:08.529428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:47.072 [2024-12-15 07:09:08.529556] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.072 [2024-12-15 07:09:08.529571] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.072 [2024-12-15 07:09:08.529580] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.072 [2024-12-15 07:09:08.529719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:47.072 [2024-12-15 07:09:08.529829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:47.072 [2024-12-15 07:09:08.529939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:47.072 [2024-12-15 07:09:08.529940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O 
failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Read completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 Write completed with error (sct=0, sc=8) 00:28:47.072 starting I/O failed 00:28:47.072 [2024-12-15 07:09:08.537785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:47.639 07:09:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.639 07:09:09 -- common/autotest_common.sh@862 -- # return 0 00:28:47.639 07:09:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:47.639 07:09:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:47.639 07:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:47.639 07:09:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.639 07:09:09 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.639 07:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.639 07:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:47.914 Malloc0 00:28:47.914 07:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.914 07:09:09 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:47.914 07:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.914 07:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:47.914 [2024-12-15 07:09:09.317800] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d07ab0/0x1d13580) succeed. 00:28:47.914 [2024-12-15 07:09:09.327518] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d09050/0x1d54c20) succeed. 
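At this point the first target has been hard-killed and a second nvmf_tgt (pid 1505614) has re-created the RDMA transport on fresh device handles, while the reconnect app (pid 1504517) keeps retrying against it. A hypothetical spot-check of the restarted target before the subsystem is re-added below (not part of this trace; both RPCs are stock SPDK methods):

scripts/rpc.py nvmf_get_transports       # should list the rdma transport just created
scripts/rpc.py nvmf_get_subsystems       # cnode1 is absent until it is re-created below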
00:28:47.914 07:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.914 07:09:09 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.914 07:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.914 07:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:47.914 07:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.914 07:09:09 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:47.914 07:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.914 07:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:47.914 07:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.914 07:09:09 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:47.914 07:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.914 07:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:47.914 [2024-12-15 07:09:09.470303] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:47.914 07:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.914 07:09:09 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:47.914 07:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.914 07:09:09 -- common/autotest_common.sh@10 -- # set +x 00:28:47.914 07:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.914 07:09:09 -- host/target_disconnect.sh@58 -- # wait 1504517 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with 
error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Write completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 Read completed with error (sct=0, sc=8) 00:28:47.914 starting I/O failed 00:28:47.914 [2024-12-15 07:09:09.542852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.173 [2024-12-15 07:09:09.554286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.173 [2024-12-15 07:09:09.554341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.173 [2024-12-15 07:09:09.554364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.173 [2024-12-15 07:09:09.554375] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.173 [2024-12-15 07:09:09.554392] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.173 [2024-12-15 07:09:09.564676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.173 qpair failed and we were unable to recover it. 00:28:48.173 [2024-12-15 07:09:09.574359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.173 [2024-12-15 07:09:09.574397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.173 [2024-12-15 07:09:09.574415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.173 [2024-12-15 07:09:09.574425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.173 [2024-12-15 07:09:09.574434] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.173 [2024-12-15 07:09:09.584733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.173 qpair failed and we were unable to recover it. 
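Every block from here to the end of the run is one failed reconnect attempt with the same anatomy: the restarted target no longer knows controller ID 0x1 (its controller state died with the killed process), so it rejects the I/O-qpair CONNECT; the host sees sct 1, sc 130 (0x82, the Fabrics Connect Invalid Parameters status), reports CQ transport error -6 (No such device or address), and declares the qpair unrecoverable. When triaging a run like this, a quick tally shows how many attempts were made (log file name hypothetical):

grep -c 'qpair failed and we were unable to recover it' nvmf_target_disconnect.log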
00:28:48.173 [2024-12-15 07:09:09.594444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.173 [2024-12-15 07:09:09.594482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.173 [2024-12-15 07:09:09.594500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.173 [2024-12-15 07:09:09.594509] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.173 [2024-12-15 07:09:09.594518] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.604885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.174 [2024-12-15 07:09:09.614421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.614466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.614482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.614491] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.614500] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.624899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.174 [2024-12-15 07:09:09.634406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.634447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.634464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.634473] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.634482] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.645002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 
00:28:48.174 [2024-12-15 07:09:09.654641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.654684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.654704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.654714] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.654722] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.664992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.174 [2024-12-15 07:09:09.674651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.674692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.674710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.674719] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.674728] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.685079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.174 [2024-12-15 07:09:09.694740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.694782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.694799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.694808] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.694816] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.705081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 
00:28:48.174 [2024-12-15 07:09:09.714763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.714807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.714823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.714832] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.714840] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.725084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.174 [2024-12-15 07:09:09.734863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.734898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.734916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.734925] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.734936] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.745031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.174 [2024-12-15 07:09:09.754898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.754942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.754959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.754968] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.754983] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.765236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 
00:28:48.174 [2024-12-15 07:09:09.774917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.774959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.774989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.774998] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.775007] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.785336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.174 [2024-12-15 07:09:09.795008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.174 [2024-12-15 07:09:09.795055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.174 [2024-12-15 07:09:09.795072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.174 [2024-12-15 07:09:09.795081] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.174 [2024-12-15 07:09:09.795089] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.174 [2024-12-15 07:09:09.805432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.174 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:09.815092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.815131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.815150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.815160] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.815170] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.825410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 
00:28:48.434 [2024-12-15 07:09:09.835073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.835106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.835123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.835131] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.835140] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.845411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:09.855206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.855247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.855264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.855273] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.855281] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.865602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:09.875187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.875230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.875247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.875256] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.875264] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.885620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 
00:28:48.434 [2024-12-15 07:09:09.895312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.895355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.895371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.895380] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.895388] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.905673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:09.915431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.915472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.915488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.915500] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.915509] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.925748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:09.935403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.935443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.935460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.935468] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.935477] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.945814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 
00:28:48.434 [2024-12-15 07:09:09.955414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.955456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.955473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.955482] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.955490] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.966071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:09.975567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.975603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.975621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.975630] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.975638] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:09.985988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:09.995542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:09.995577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:09.995594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:09.995602] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:09.995611] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:10.005996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 
00:28:48.434 [2024-12-15 07:09:10.016213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:10.016261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:10.016290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.434 [2024-12-15 07:09:10.016299] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.434 [2024-12-15 07:09:10.016308] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.434 [2024-12-15 07:09:10.026001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.434 qpair failed and we were unable to recover it. 00:28:48.434 [2024-12-15 07:09:10.035747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.434 [2024-12-15 07:09:10.035794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.434 [2024-12-15 07:09:10.035810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.435 [2024-12-15 07:09:10.035820] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.435 [2024-12-15 07:09:10.035829] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.435 [2024-12-15 07:09:10.046118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.435 qpair failed and we were unable to recover it. 00:28:48.435 [2024-12-15 07:09:10.055732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.435 [2024-12-15 07:09:10.055781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.435 [2024-12-15 07:09:10.055798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.435 [2024-12-15 07:09:10.055807] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.435 [2024-12-15 07:09:10.055815] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.435 [2024-12-15 07:09:10.066278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.435 qpair failed and we were unable to recover it. 
00:28:48.694 [2024-12-15 07:09:10.075885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.075923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.075940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.075949] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.075958] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.086710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 00:28:48.694 [2024-12-15 07:09:10.095803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.095849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.095870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.095880] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.095889] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.106290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 00:28:48.694 [2024-12-15 07:09:10.115950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.115998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.116015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.116024] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.116033] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.126413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 
00:28:48.694 [2024-12-15 07:09:10.135947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.135995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.136012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.136021] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.136029] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.146395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 00:28:48.694 [2024-12-15 07:09:10.156061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.156098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.156115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.156124] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.156132] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.166531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 00:28:48.694 [2024-12-15 07:09:10.176145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.176185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.176202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.176211] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.176222] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.186542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 
00:28:48.694 [2024-12-15 07:09:10.196130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.196168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.196184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.196193] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.196201] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.206671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 00:28:48.694 [2024-12-15 07:09:10.216322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.216360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.216377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.216387] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.216395] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.226727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 00:28:48.694 [2024-12-15 07:09:10.236274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.694 [2024-12-15 07:09:10.236314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.694 [2024-12-15 07:09:10.236329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.694 [2024-12-15 07:09:10.236338] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.694 [2024-12-15 07:09:10.236347] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.694 [2024-12-15 07:09:10.246909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.694 qpair failed and we were unable to recover it. 
00:28:48.695 [2024-12-15 07:09:10.256349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.695 [2024-12-15 07:09:10.256387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.695 [2024-12-15 07:09:10.256403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.695 [2024-12-15 07:09:10.256412] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.695 [2024-12-15 07:09:10.256420] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.695 [2024-12-15 07:09:10.266860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.695 qpair failed and we were unable to recover it. 00:28:48.695 [2024-12-15 07:09:10.276438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.695 [2024-12-15 07:09:10.276484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.695 [2024-12-15 07:09:10.276501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.695 [2024-12-15 07:09:10.276510] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.695 [2024-12-15 07:09:10.276518] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.695 [2024-12-15 07:09:10.286846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.695 qpair failed and we were unable to recover it. 00:28:48.695 [2024-12-15 07:09:10.296413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.695 [2024-12-15 07:09:10.296447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.695 [2024-12-15 07:09:10.296463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.695 [2024-12-15 07:09:10.296472] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.695 [2024-12-15 07:09:10.296480] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.695 [2024-12-15 07:09:10.306763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.695 qpair failed and we were unable to recover it. 
00:28:48.695 [2024-12-15 07:09:10.316577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.695 [2024-12-15 07:09:10.316620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.695 [2024-12-15 07:09:10.316637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.695 [2024-12-15 07:09:10.316646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.695 [2024-12-15 07:09:10.316654] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.695 [2024-12-15 07:09:10.327014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.695 qpair failed and we were unable to recover it. 00:28:48.954 [2024-12-15 07:09:10.336613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.336654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.336670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.336679] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.336688] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.347024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.954 qpair failed and we were unable to recover it. 00:28:48.954 [2024-12-15 07:09:10.356650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.356699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.356716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.356728] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.356736] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.367226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.954 qpair failed and we were unable to recover it. 
00:28:48.954 [2024-12-15 07:09:10.376730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.376772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.376788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.376797] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.376806] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.386974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.954 qpair failed and we were unable to recover it. 00:28:48.954 [2024-12-15 07:09:10.396837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.396874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.396891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.396900] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.396909] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.407193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.954 qpair failed and we were unable to recover it. 00:28:48.954 [2024-12-15 07:09:10.416811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.416852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.416869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.416878] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.416886] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.427173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.954 qpair failed and we were unable to recover it. 
00:28:48.954 [2024-12-15 07:09:10.436851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.436888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.436905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.436914] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.436922] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.447163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.954 qpair failed and we were unable to recover it. 00:28:48.954 [2024-12-15 07:09:10.456963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.457004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.457021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.457030] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.457038] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.467320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.954 qpair failed and we were unable to recover it. 00:28:48.954 [2024-12-15 07:09:10.476922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.954 [2024-12-15 07:09:10.476960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.954 [2024-12-15 07:09:10.476982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.954 [2024-12-15 07:09:10.476992] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.954 [2024-12-15 07:09:10.477000] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.954 [2024-12-15 07:09:10.487422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.955 qpair failed and we were unable to recover it. 
00:28:48.955 [2024-12-15 07:09:10.496954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.955 [2024-12-15 07:09:10.497003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.955 [2024-12-15 07:09:10.497020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.955 [2024-12-15 07:09:10.497029] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.955 [2024-12-15 07:09:10.497038] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.955 [2024-12-15 07:09:10.507181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.955 qpair failed and we were unable to recover it. 00:28:48.955 [2024-12-15 07:09:10.517105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.955 [2024-12-15 07:09:10.517144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.955 [2024-12-15 07:09:10.517162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.955 [2024-12-15 07:09:10.517171] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.955 [2024-12-15 07:09:10.517179] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.955 [2024-12-15 07:09:10.527416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.955 qpair failed and we were unable to recover it. 00:28:48.955 [2024-12-15 07:09:10.537178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.955 [2024-12-15 07:09:10.537218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.955 [2024-12-15 07:09:10.537240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.955 [2024-12-15 07:09:10.537250] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.955 [2024-12-15 07:09:10.537258] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.955 [2024-12-15 07:09:10.547571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.955 qpair failed and we were unable to recover it. 
00:28:48.955 [2024-12-15 07:09:10.557244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.955 [2024-12-15 07:09:10.557280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.955 [2024-12-15 07:09:10.557297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.955 [2024-12-15 07:09:10.557306] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.955 [2024-12-15 07:09:10.557314] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.955 [2024-12-15 07:09:10.567609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.955 qpair failed and we were unable to recover it. 00:28:48.955 [2024-12-15 07:09:10.577284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.955 [2024-12-15 07:09:10.577325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.955 [2024-12-15 07:09:10.577342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.955 [2024-12-15 07:09:10.577350] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.955 [2024-12-15 07:09:10.577359] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:48.955 [2024-12-15 07:09:10.587652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.955 qpair failed and we were unable to recover it. 00:28:49.214 [2024-12-15 07:09:10.597320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.597356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.597373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.597382] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.597390] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.607689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 
00:28:49.214 [2024-12-15 07:09:10.617368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.617405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.617421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.617430] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.617438] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.627781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 00:28:49.214 [2024-12-15 07:09:10.637452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.637487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.637504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.637513] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.637521] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.648072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 00:28:49.214 [2024-12-15 07:09:10.657445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.657487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.657504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.657513] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.657522] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.667666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 
00:28:49.214 [2024-12-15 07:09:10.677553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.677595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.677612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.677622] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.677631] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.688345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 00:28:49.214 [2024-12-15 07:09:10.697595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.697640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.697657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.697666] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.697674] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.707990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 00:28:49.214 [2024-12-15 07:09:10.717659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.717700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.717717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.717726] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.717734] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.727804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 
00:28:49.214 [2024-12-15 07:09:10.737700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.737740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.737756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.737765] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.737774] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.214 [2024-12-15 07:09:10.748094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.214 qpair failed and we were unable to recover it. 00:28:49.214 [2024-12-15 07:09:10.757734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.214 [2024-12-15 07:09:10.757778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.214 [2024-12-15 07:09:10.757795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.214 [2024-12-15 07:09:10.757804] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.214 [2024-12-15 07:09:10.757812] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.215 [2024-12-15 07:09:10.768276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.215 qpair failed and we were unable to recover it. 00:28:49.215 [2024-12-15 07:09:10.777872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.215 [2024-12-15 07:09:10.777921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.215 [2024-12-15 07:09:10.777938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.215 [2024-12-15 07:09:10.777946] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.215 [2024-12-15 07:09:10.777955] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.215 [2024-12-15 07:09:10.788285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.215 qpair failed and we were unable to recover it. 
00:28:49.215 [2024-12-15 07:09:10.797807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.215 [2024-12-15 07:09:10.797840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.215 [2024-12-15 07:09:10.797856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.215 [2024-12-15 07:09:10.797868] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.215 [2024-12-15 07:09:10.797877] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.215 [2024-12-15 07:09:10.808218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.215 qpair failed and we were unable to recover it. 00:28:49.215 [2024-12-15 07:09:10.817920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.215 [2024-12-15 07:09:10.817960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.215 [2024-12-15 07:09:10.817986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.215 [2024-12-15 07:09:10.817995] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.215 [2024-12-15 07:09:10.818004] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.215 [2024-12-15 07:09:10.828437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.215 qpair failed and we were unable to recover it. 00:28:49.215 [2024-12-15 07:09:10.837959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.215 [2024-12-15 07:09:10.838011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.215 [2024-12-15 07:09:10.838028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.215 [2024-12-15 07:09:10.838036] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.215 [2024-12-15 07:09:10.838045] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.215 [2024-12-15 07:09:10.848339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.215 qpair failed and we were unable to recover it. 
00:28:49.474 [2024-12-15 07:09:10.858024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.858061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.858078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.858086] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.474 [2024-12-15 07:09:10.858095] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.474 [2024-12-15 07:09:10.868268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.474 qpair failed and we were unable to recover it. 00:28:49.474 [2024-12-15 07:09:10.878105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.878138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.878155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.878164] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.474 [2024-12-15 07:09:10.878172] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.474 [2024-12-15 07:09:10.888447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.474 qpair failed and we were unable to recover it. 00:28:49.474 [2024-12-15 07:09:10.898232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.898273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.898290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.898299] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.474 [2024-12-15 07:09:10.898307] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.474 [2024-12-15 07:09:10.908411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.474 qpair failed and we were unable to recover it. 
00:28:49.474 [2024-12-15 07:09:10.918241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.918287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.918303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.918312] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.474 [2024-12-15 07:09:10.918320] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.474 [2024-12-15 07:09:10.928713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.474 qpair failed and we were unable to recover it. 00:28:49.474 [2024-12-15 07:09:10.938340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.938381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.938397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.938406] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.474 [2024-12-15 07:09:10.938414] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.474 [2024-12-15 07:09:10.948766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.474 qpair failed and we were unable to recover it. 00:28:49.474 [2024-12-15 07:09:10.958182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.958221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.958237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.958246] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.474 [2024-12-15 07:09:10.958254] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.474 [2024-12-15 07:09:10.968853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.474 qpair failed and we were unable to recover it. 
00:28:49.474 [2024-12-15 07:09:10.978538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.978582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.978601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.978610] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.474 [2024-12-15 07:09:10.978619] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.474 [2024-12-15 07:09:10.988831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.474 qpair failed and we were unable to recover it. 00:28:49.474 [2024-12-15 07:09:10.998515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.474 [2024-12-15 07:09:10.998556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.474 [2024-12-15 07:09:10.998572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.474 [2024-12-15 07:09:10.998581] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.475 [2024-12-15 07:09:10.998589] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.475 [2024-12-15 07:09:11.008873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.475 qpair failed and we were unable to recover it. 00:28:49.475 [2024-12-15 07:09:11.018514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.475 [2024-12-15 07:09:11.018553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.475 [2024-12-15 07:09:11.018570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.475 [2024-12-15 07:09:11.018579] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.475 [2024-12-15 07:09:11.018587] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.475 [2024-12-15 07:09:11.028793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.475 qpair failed and we were unable to recover it. 
00:28:49.475 [2024-12-15 07:09:11.038627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.475 [2024-12-15 07:09:11.038663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.475 [2024-12-15 07:09:11.038679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.475 [2024-12-15 07:09:11.038688] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.475 [2024-12-15 07:09:11.038696] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.475 [2024-12-15 07:09:11.049123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.475 qpair failed and we were unable to recover it. 00:28:49.475 [2024-12-15 07:09:11.058640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.475 [2024-12-15 07:09:11.058681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.475 [2024-12-15 07:09:11.058697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.475 [2024-12-15 07:09:11.058706] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.475 [2024-12-15 07:09:11.058715] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.475 [2024-12-15 07:09:11.069132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.475 qpair failed and we were unable to recover it. 00:28:49.475 [2024-12-15 07:09:11.078788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.475 [2024-12-15 07:09:11.078832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.475 [2024-12-15 07:09:11.078848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.475 [2024-12-15 07:09:11.078857] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.475 [2024-12-15 07:09:11.078865] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.475 [2024-12-15 07:09:11.089199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.475 qpair failed and we were unable to recover it. 
00:28:49.475 [2024-12-15 07:09:11.098755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.475 [2024-12-15 07:09:11.098796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.475 [2024-12-15 07:09:11.098812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.475 [2024-12-15 07:09:11.098821] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.475 [2024-12-15 07:09:11.098830] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.475 [2024-12-15 07:09:11.109162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.475 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.118822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.118861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.118877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.118886] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.118895] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.129305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.138823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.138865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.138881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.138890] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.138899] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.149259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 
00:28:49.734 [2024-12-15 07:09:11.158954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.159001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.159021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.159030] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.159039] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.169308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.179079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.179115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.179132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.179142] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.179150] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.189470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.199020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.199062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.199079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.199088] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.199097] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.209629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 
00:28:49.734 [2024-12-15 07:09:11.219137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.219179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.219196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.219204] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.219213] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.229493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.239178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.239216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.239233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.239242] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.239254] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.249570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.259262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.259302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.259319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.259328] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.259336] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.269512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 
00:28:49.734 [2024-12-15 07:09:11.279307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.279349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.279367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.279376] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.279385] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.289705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.299361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.299403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.734 [2024-12-15 07:09:11.299420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.734 [2024-12-15 07:09:11.299429] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.734 [2024-12-15 07:09:11.299438] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.734 [2024-12-15 07:09:11.309800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.734 qpair failed and we were unable to recover it. 00:28:49.734 [2024-12-15 07:09:11.319474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.734 [2024-12-15 07:09:11.319510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.735 [2024-12-15 07:09:11.319527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.735 [2024-12-15 07:09:11.319536] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.735 [2024-12-15 07:09:11.319544] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.735 [2024-12-15 07:09:11.330183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.735 qpair failed and we were unable to recover it. 
00:28:49.735 [2024-12-15 07:09:11.339589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.735 [2024-12-15 07:09:11.339623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.735 [2024-12-15 07:09:11.339640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.735 [2024-12-15 07:09:11.339649] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.735 [2024-12-15 07:09:11.339657] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.735 [2024-12-15 07:09:11.349896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.735 qpair failed and we were unable to recover it. 00:28:49.735 [2024-12-15 07:09:11.359681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.735 [2024-12-15 07:09:11.359725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.735 [2024-12-15 07:09:11.359742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.735 [2024-12-15 07:09:11.359751] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.735 [2024-12-15 07:09:11.359759] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.735 [2024-12-15 07:09:11.369988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.735 qpair failed and we were unable to recover it. 00:28:49.994 [2024-12-15 07:09:11.379602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.994 [2024-12-15 07:09:11.379641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.994 [2024-12-15 07:09:11.379659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.994 [2024-12-15 07:09:11.379669] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.994 [2024-12-15 07:09:11.379678] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.994 [2024-12-15 07:09:11.390042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.994 qpair failed and we were unable to recover it. 
00:28:49.994 [2024-12-15 07:09:11.399789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.994 [2024-12-15 07:09:11.399833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.994 [2024-12-15 07:09:11.399849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.994 [2024-12-15 07:09:11.399859] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.994 [2024-12-15 07:09:11.399867] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.994 [2024-12-15 07:09:11.409940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.994 qpair failed and we were unable to recover it. 00:28:49.994 [2024-12-15 07:09:11.419818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.994 [2024-12-15 07:09:11.419855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.994 [2024-12-15 07:09:11.419871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.994 [2024-12-15 07:09:11.419882] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.994 [2024-12-15 07:09:11.419891] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.994 [2024-12-15 07:09:11.430008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.994 qpair failed and we were unable to recover it. 00:28:49.994 [2024-12-15 07:09:11.439899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.994 [2024-12-15 07:09:11.439937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.994 [2024-12-15 07:09:11.439953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.994 [2024-12-15 07:09:11.439962] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.439970] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.450166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 
00:28:49.995 [2024-12-15 07:09:11.459955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.460000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.460017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.460025] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.460034] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.470267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 00:28:49.995 [2024-12-15 07:09:11.480110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.480148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.480164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.480173] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.480181] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.490194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 00:28:49.995 [2024-12-15 07:09:11.500125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.500166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.500183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.500192] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.500200] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.510246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 
00:28:49.995 [2024-12-15 07:09:11.520125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.520159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.520175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.520184] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.520192] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.530710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 00:28:49.995 [2024-12-15 07:09:11.540175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.540217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.540233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.540242] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.540250] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.550529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 00:28:49.995 [2024-12-15 07:09:11.560297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.560340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.560356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.560365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.560373] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.570735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 
00:28:49.995 [2024-12-15 07:09:11.580362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.580400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.580416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.580425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.580433] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.590665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 00:28:49.995 [2024-12-15 07:09:11.600397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.600437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.600456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.600465] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.600474] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.610857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 00:28:49.995 [2024-12-15 07:09:11.620478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.995 [2024-12-15 07:09:11.620518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.995 [2024-12-15 07:09:11.620534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.995 [2024-12-15 07:09:11.620543] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.995 [2024-12-15 07:09:11.620551] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:49.995 [2024-12-15 07:09:11.630990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.995 qpair failed and we were unable to recover it. 
00:28:50.254 [2024-12-15 07:09:11.640557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.640598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.640614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.640623] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.640631] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.651053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.660526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.660560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.660577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.660586] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.660594] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.671046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.680588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.680622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.680637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.680646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.680658] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.691137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 
00:28:50.255 [2024-12-15 07:09:11.700686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.700726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.700742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.700751] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.700759] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.711032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.720707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.720753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.720770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.720779] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.720787] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.731134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.740906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.740951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.740969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.740983] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.740992] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.751214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 
00:28:50.255 [2024-12-15 07:09:11.760847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.760883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.760900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.760909] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.760917] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.771237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.780822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.780862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.780879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.780888] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.780896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.791356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.800800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.800838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.800854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.800862] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.800871] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.811337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 
00:28:50.255 [2024-12-15 07:09:11.820933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.820973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.820995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.821004] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.821012] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.831386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.841118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.841156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.841172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.841181] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.841189] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.851664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 00:28:50.255 [2024-12-15 07:09:11.861138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.255 [2024-12-15 07:09:11.861178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.255 [2024-12-15 07:09:11.861195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.255 [2024-12-15 07:09:11.861207] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.255 [2024-12-15 07:09:11.861215] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.255 [2024-12-15 07:09:11.871527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.255 qpair failed and we were unable to recover it. 
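The first record of each block is the target's half of the same story: _nvmf_ctrlr_add_io_qpair cannot map controller ID 0x1 to any live admin-created controller, so the I/O queue pair is never admitted and the CONNECT is rejected with the invalid-parameters status decoded earlier. The sketch below shows the general shape of such an admission check; the table, bounds, and helper name are invented for illustration and are not the SPDK nvmf-target internals.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical subsystem-side admission check, for illustration only.
 * A real target resolves the cntlid carried in an I/O-queue CONNECT
 * against the controllers created by earlier admin-queue CONNECTs. */
struct ctrlr_table {
    void *ctrlrs[256];                /* index = controller ID (cntlid) */
};

static int add_io_qpair(const struct ctrlr_table *t, uint16_t cntlid)
{
    if (cntlid >= 256 || t->ctrlrs[cntlid] == NULL) {
        /* Logged above as "Unknown controller ID 0x1"; surfaces to the
         * host as sct 1, sc 0x82 (Connect Invalid Parameters). */
        return 0x82;
    }
    return 0;                         /* qpair may be bound to the controller */
}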
00:28:50.255 [2024-12-15 07:09:11.881173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.256 [2024-12-15 07:09:11.881211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.256 [2024-12-15 07:09:11.881227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.256 [2024-12-15 07:09:11.881236] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.256 [2024-12-15 07:09:11.881244] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.256 [2024-12-15 07:09:11.891628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.256 qpair failed and we were unable to recover it. 00:28:50.515 [2024-12-15 07:09:11.901309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.515 [2024-12-15 07:09:11.901347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.515 [2024-12-15 07:09:11.901363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.515 [2024-12-15 07:09:11.901372] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.515 [2024-12-15 07:09:11.901380] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.515 [2024-12-15 07:09:11.911619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.515 qpair failed and we were unable to recover it. 00:28:50.515 [2024-12-15 07:09:11.921289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.515 [2024-12-15 07:09:11.921330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.515 [2024-12-15 07:09:11.921346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.515 [2024-12-15 07:09:11.921355] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.515 [2024-12-15 07:09:11.921363] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.515 [2024-12-15 07:09:11.931754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.515 qpair failed and we were unable to recover it. 
00:28:50.515 [2024-12-15 07:09:11.941386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.515 [2024-12-15 07:09:11.941425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.515 [2024-12-15 07:09:11.941441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.515 [2024-12-15 07:09:11.941450] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.515 [2024-12-15 07:09:11.941458] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.515 [2024-12-15 07:09:11.951778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.515 qpair failed and we were unable to recover it. 00:28:50.515 [2024-12-15 07:09:11.961299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.515 [2024-12-15 07:09:11.961340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.515 [2024-12-15 07:09:11.961357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.515 [2024-12-15 07:09:11.961365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.515 [2024-12-15 07:09:11.961374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.515 [2024-12-15 07:09:11.972420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.515 qpair failed and we were unable to recover it. 00:28:50.515 [2024-12-15 07:09:11.981503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.515 [2024-12-15 07:09:11.981544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.515 [2024-12-15 07:09:11.981560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.515 [2024-12-15 07:09:11.981569] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.515 [2024-12-15 07:09:11.981577] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.515 [2024-12-15 07:09:11.991818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.515 qpair failed and we were unable to recover it. 
00:28:50.515 [2024-12-15 07:09:12.001539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.515 [2024-12-15 07:09:12.001581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.515 [2024-12-15 07:09:12.001597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.515 [2024-12-15 07:09:12.001606] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.001614] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.012040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 00:28:50.516 [2024-12-15 07:09:12.021536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.516 [2024-12-15 07:09:12.021576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.516 [2024-12-15 07:09:12.021593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.516 [2024-12-15 07:09:12.021602] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.021610] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.032096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 00:28:50.516 [2024-12-15 07:09:12.041634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.516 [2024-12-15 07:09:12.041677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.516 [2024-12-15 07:09:12.041697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.516 [2024-12-15 07:09:12.041706] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.041714] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.051951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 
00:28:50.516 [2024-12-15 07:09:12.061680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.516 [2024-12-15 07:09:12.061720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.516 [2024-12-15 07:09:12.061737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.516 [2024-12-15 07:09:12.061745] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.061754] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.071979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 00:28:50.516 [2024-12-15 07:09:12.081709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.516 [2024-12-15 07:09:12.081751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.516 [2024-12-15 07:09:12.081767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.516 [2024-12-15 07:09:12.081776] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.081785] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.092164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 00:28:50.516 [2024-12-15 07:09:12.101826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.516 [2024-12-15 07:09:12.101869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.516 [2024-12-15 07:09:12.101885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.516 [2024-12-15 07:09:12.101894] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.101902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.112221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 
00:28:50.516 [2024-12-15 07:09:12.121902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.516 [2024-12-15 07:09:12.121945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.516 [2024-12-15 07:09:12.121961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.516 [2024-12-15 07:09:12.121969] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.121985] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.132324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 00:28:50.516 [2024-12-15 07:09:12.141926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.516 [2024-12-15 07:09:12.141962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.516 [2024-12-15 07:09:12.141983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.516 [2024-12-15 07:09:12.141993] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.516 [2024-12-15 07:09:12.142002] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.516 [2024-12-15 07:09:12.152461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.516 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.161914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.161950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.161966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.161980] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.161989] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.172453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 
00:28:50.776 [2024-12-15 07:09:12.182151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.182189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.182205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.182214] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.182222] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.192402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.202117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.202153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.202170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.202178] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.202187] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.212611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.222218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.222255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.222271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.222280] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.222288] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.232684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 
00:28:50.776 [2024-12-15 07:09:12.242166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.242202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.242218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.242227] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.242236] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.252652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.262165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.262206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.262222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.262231] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.262239] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.272657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.282353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.282389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.282405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.282414] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.282423] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.292585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 
00:28:50.776 [2024-12-15 07:09:12.302430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.302478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.302494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.302506] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.302515] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.312898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.322377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.322414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.322431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.322439] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.322448] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.332778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.342448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.342489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.342505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.342514] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.342522] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.352705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 
00:28:50.776 [2024-12-15 07:09:12.362502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.776 [2024-12-15 07:09:12.362551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.776 [2024-12-15 07:09:12.362569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.776 [2024-12-15 07:09:12.362579] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.776 [2024-12-15 07:09:12.362589] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.776 [2024-12-15 07:09:12.372757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.776 qpair failed and we were unable to recover it. 00:28:50.776 [2024-12-15 07:09:12.382555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.777 [2024-12-15 07:09:12.382599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.777 [2024-12-15 07:09:12.382615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.777 [2024-12-15 07:09:12.382624] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.777 [2024-12-15 07:09:12.382632] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.777 [2024-12-15 07:09:12.392957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.777 qpair failed and we were unable to recover it. 00:28:50.777 [2024-12-15 07:09:12.402661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.777 [2024-12-15 07:09:12.402701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.777 [2024-12-15 07:09:12.402717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.777 [2024-12-15 07:09:12.402726] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.777 [2024-12-15 07:09:12.402734] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:50.777 [2024-12-15 07:09:12.412951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:50.777 qpair failed and we were unable to recover it. 
00:28:51.036 [2024-12-15 07:09:12.422718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.036 [2024-12-15 07:09:12.422760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.036 [2024-12-15 07:09:12.422776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.036 [2024-12-15 07:09:12.422784] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.036 [2024-12-15 07:09:12.422793] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.036 [2024-12-15 07:09:12.433036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.036 qpair failed and we were unable to recover it.
00:28:51.036 [2024-12-15 07:09:12.442688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.036 [2024-12-15 07:09:12.442725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.036 [2024-12-15 07:09:12.442741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.036 [2024-12-15 07:09:12.442750] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.036 [2024-12-15 07:09:12.442758] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.036 [2024-12-15 07:09:12.453207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.036 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.462724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.462765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.462781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.462790] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.462799] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.473310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.482872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.482912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.482931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.482940] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.482949] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.493211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.502890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.502930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.502947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.502956] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.502965] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.513391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.523011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.523049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.523065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.523074] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.523083] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.533513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.543013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.543052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.543068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.543077] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.543085] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.553489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.563066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.563102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.563118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.563127] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.563136] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.573572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.583191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.583234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.583250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.583259] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.583268] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.593577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.603181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.603223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.603239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.603248] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.603256] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.614277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.623230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.623271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.623287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.623295] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.623304] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.633733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.643385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.643426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.643442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.643451] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.643459] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.653710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.037 [2024-12-15 07:09:12.663432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.037 [2024-12-15 07:09:12.663478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.037 [2024-12-15 07:09:12.663494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.037 [2024-12-15 07:09:12.663503] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.037 [2024-12-15 07:09:12.663511] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.037 [2024-12-15 07:09:12.673713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.037 qpair failed and we were unable to recover it.
00:28:51.297 [2024-12-15 07:09:12.683505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.297 [2024-12-15 07:09:12.683546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.297 [2024-12-15 07:09:12.683562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.297 [2024-12-15 07:09:12.683570] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.297 [2024-12-15 07:09:12.683579] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.297 [2024-12-15 07:09:12.693994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.297 qpair failed and we were unable to recover it.
00:28:51.297 [2024-12-15 07:09:12.703530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.297 [2024-12-15 07:09:12.703570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.297 [2024-12-15 07:09:12.703587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.297 [2024-12-15 07:09:12.703596] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.297 [2024-12-15 07:09:12.703604] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.297 [2024-12-15 07:09:12.713843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.297 qpair failed and we were unable to recover it.
00:28:51.297 [2024-12-15 07:09:12.723628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.297 [2024-12-15 07:09:12.723669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.297 [2024-12-15 07:09:12.723686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.297 [2024-12-15 07:09:12.723694] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.297 [2024-12-15 07:09:12.723703] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.297 [2024-12-15 07:09:12.733899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.297 qpair failed and we were unable to recover it.
00:28:51.297 [2024-12-15 07:09:12.743692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.297 [2024-12-15 07:09:12.743733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.297 [2024-12-15 07:09:12.743751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.297 [2024-12-15 07:09:12.743763] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.297 [2024-12-15 07:09:12.743772] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.297 [2024-12-15 07:09:12.753942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.297 qpair failed and we were unable to recover it.
00:28:51.297 [2024-12-15 07:09:12.763662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.297 [2024-12-15 07:09:12.763706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.297 [2024-12-15 07:09:12.763723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.297 [2024-12-15 07:09:12.763732] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.297 [2024-12-15 07:09:12.763740] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.297 [2024-12-15 07:09:12.774345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.297 qpair failed and we were unable to recover it.
00:28:51.297 [2024-12-15 07:09:12.783830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.297 [2024-12-15 07:09:12.783868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.297 [2024-12-15 07:09:12.783885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.297 [2024-12-15 07:09:12.783893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.297 [2024-12-15 07:09:12.783902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.297 [2024-12-15 07:09:12.794221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.297 qpair failed and we were unable to recover it.
00:28:51.297 [2024-12-15 07:09:12.803818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.297 [2024-12-15 07:09:12.803852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.297 [2024-12-15 07:09:12.803868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.298 [2024-12-15 07:09:12.803877] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.298 [2024-12-15 07:09:12.803885] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.298 [2024-12-15 07:09:12.814303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.298 qpair failed and we were unable to recover it.
00:28:51.298 [2024-12-15 07:09:12.823875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.298 [2024-12-15 07:09:12.823917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.298 [2024-12-15 07:09:12.823933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.298 [2024-12-15 07:09:12.823942] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.298 [2024-12-15 07:09:12.823951] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.298 [2024-12-15 07:09:12.834154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.298 qpair failed and we were unable to recover it.
00:28:51.298 [2024-12-15 07:09:12.843961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.298 [2024-12-15 07:09:12.844011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.298 [2024-12-15 07:09:12.844028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.298 [2024-12-15 07:09:12.844037] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.298 [2024-12-15 07:09:12.844046] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.298 [2024-12-15 07:09:12.854417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.298 qpair failed and we were unable to recover it.
00:28:51.298 [2024-12-15 07:09:12.864017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.298 [2024-12-15 07:09:12.864051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.298 [2024-12-15 07:09:12.864068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.298 [2024-12-15 07:09:12.864077] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.298 [2024-12-15 07:09:12.864085] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.298 [2024-12-15 07:09:12.874357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.298 qpair failed and we were unable to recover it.
00:28:51.298 [2024-12-15 07:09:12.884192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.298 [2024-12-15 07:09:12.884235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.298 [2024-12-15 07:09:12.884252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.298 [2024-12-15 07:09:12.884261] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.298 [2024-12-15 07:09:12.884269] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.298 [2024-12-15 07:09:12.894492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.298 qpair failed and we were unable to recover it.
00:28:51.298 [2024-12-15 07:09:12.904273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.298 [2024-12-15 07:09:12.904316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.298 [2024-12-15 07:09:12.904332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.298 [2024-12-15 07:09:12.904342] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.298 [2024-12-15 07:09:12.904350] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.298 [2024-12-15 07:09:12.914496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.298 qpair failed and we were unable to recover it.
00:28:51.298 [2024-12-15 07:09:12.924334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.298 [2024-12-15 07:09:12.924379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.298 [2024-12-15 07:09:12.924399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.298 [2024-12-15 07:09:12.924408] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.298 [2024-12-15 07:09:12.924417] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.298 [2024-12-15 07:09:12.934681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.298 qpair failed and we were unable to recover it.
00:28:51.557 [2024-12-15 07:09:12.944345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.557 [2024-12-15 07:09:12.944390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.557 [2024-12-15 07:09:12.944407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.557 [2024-12-15 07:09:12.944415] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.557 [2024-12-15 07:09:12.944424] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.557 [2024-12-15 07:09:12.954621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:12.964350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:12.964393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:12.964410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:12.964419] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:12.964428] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:12.974784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:12.984338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:12.984379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:12.984396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:12.984405] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:12.984414] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:12.994790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.004588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.004632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.004648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.004657] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.004665] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.014826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.024572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.024610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.024626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.024635] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.024643] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.034909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.044602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.044645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.044661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.044670] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.044679] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.054982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.064738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.064780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.064796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.064805] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.064814] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.075046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.084875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.084918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.084934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.084943] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.084951] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.095087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.104828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.104867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.104887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.104896] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.104904] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.115160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.124957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.124995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.125012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.125021] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.125029] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.135224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.144945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.144991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.145008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.145017] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.145025] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.155370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.165069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.165115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.165131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.165140] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.165148] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.175248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-12-15 07:09:13.185151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.558 [2024-12-15 07:09:13.185192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.558 [2024-12-15 07:09:13.185209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.558 [2024-12-15 07:09:13.185218] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.558 [2024-12-15 07:09:13.185229] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.558 [2024-12-15 07:09:13.195352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.818 [2024-12-15 07:09:13.205185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.818 [2024-12-15 07:09:13.205225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.818 [2024-12-15 07:09:13.205242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.818 [2024-12-15 07:09:13.205250] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.818 [2024-12-15 07:09:13.205259] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.818 [2024-12-15 07:09:13.215526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.818 qpair failed and we were unable to recover it.
00:28:51.818 [2024-12-15 07:09:13.225188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.818 [2024-12-15 07:09:13.225225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.818 [2024-12-15 07:09:13.225241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.818 [2024-12-15 07:09:13.225250] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.818 [2024-12-15 07:09:13.225258] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.818 [2024-12-15 07:09:13.235692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.818 qpair failed and we were unable to recover it.
00:28:51.818 [2024-12-15 07:09:13.245375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.818 [2024-12-15 07:09:13.245418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.818 [2024-12-15 07:09:13.245434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.818 [2024-12-15 07:09:13.245443] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.818 [2024-12-15 07:09:13.245451] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.818 [2024-12-15 07:09:13.256195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.818 qpair failed and we were unable to recover it.
00:28:51.818 [2024-12-15 07:09:13.265270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.818 [2024-12-15 07:09:13.265312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.818 [2024-12-15 07:09:13.265328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.818 [2024-12-15 07:09:13.265336] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.818 [2024-12-15 07:09:13.265344] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.818 [2024-12-15 07:09:13.275616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.818 qpair failed and we were unable to recover it.
00:28:51.818 [2024-12-15 07:09:13.285392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.818 [2024-12-15 07:09:13.285431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.818 [2024-12-15 07:09:13.285447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.818 [2024-12-15 07:09:13.285456] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.818 [2024-12-15 07:09:13.285464] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.818 [2024-12-15 07:09:13.295817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.818 qpair failed and we were unable to recover it.
00:28:51.818 [2024-12-15 07:09:13.305499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.818 [2024-12-15 07:09:13.305536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.818 [2024-12-15 07:09:13.305552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.818 [2024-12-15 07:09:13.305561] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.818 [2024-12-15 07:09:13.305570] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.818 [2024-12-15 07:09:13.315768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:51.819 [2024-12-15 07:09:13.325510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.819 [2024-12-15 07:09:13.325547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.819 [2024-12-15 07:09:13.325564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.819 [2024-12-15 07:09:13.325572] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.819 [2024-12-15 07:09:13.325581] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.819 [2024-12-15 07:09:13.335806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:51.819 [2024-12-15 07:09:13.345573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.819 [2024-12-15 07:09:13.345610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.819 [2024-12-15 07:09:13.345629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.819 [2024-12-15 07:09:13.345639] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.819 [2024-12-15 07:09:13.345649] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.819 [2024-12-15 07:09:13.355885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:51.819 [2024-12-15 07:09:13.365666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.819 [2024-12-15 07:09:13.365700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.819 [2024-12-15 07:09:13.365720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.819 [2024-12-15 07:09:13.365729] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.819 [2024-12-15 07:09:13.365737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.819 [2024-12-15 07:09:13.376019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:51.819 [2024-12-15 07:09:13.385624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.819 [2024-12-15 07:09:13.385664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.819 [2024-12-15 07:09:13.385680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.819 [2024-12-15 07:09:13.385689] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.819 [2024-12-15 07:09:13.385698] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.819 [2024-12-15 07:09:13.395986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:51.819 [2024-12-15 07:09:13.405659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.819 [2024-12-15 07:09:13.405702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.819 [2024-12-15 07:09:13.405718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.819 [2024-12-15 07:09:13.405727] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.819 [2024-12-15 07:09:13.405735] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.819 [2024-12-15 07:09:13.416275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:51.819 [2024-12-15 07:09:13.425751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.819 [2024-12-15 07:09:13.425793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.819 [2024-12-15 07:09:13.425810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.819 [2024-12-15 07:09:13.425819] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.819 [2024-12-15 07:09:13.425827] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.819 [2024-12-15 07:09:13.436148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:51.819 [2024-12-15 07:09:13.445727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.819 [2024-12-15 07:09:13.445764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.819 [2024-12-15 07:09:13.445780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.819 [2024-12-15 07:09:13.445789] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.819 [2024-12-15 07:09:13.445798] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:51.819 [2024-12-15 07:09:13.456143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:51.819 qpair failed and we were unable to recover it.
00:28:52.092 [2024-12-15 07:09:13.465864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.092 [2024-12-15 07:09:13.465904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.092 [2024-12-15 07:09:13.465924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.092 [2024-12-15 07:09:13.465933] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.092 [2024-12-15 07:09:13.465942] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:52.092 [2024-12-15 07:09:13.476202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:52.092 qpair failed and we were unable to recover it.
00:28:52.092 [2024-12-15 07:09:13.485805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.092 [2024-12-15 07:09:13.485843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.092 [2024-12-15 07:09:13.485860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.092 [2024-12-15 07:09:13.485869] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.092 [2024-12-15 07:09:13.485878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:52.092 [2024-12-15 07:09:13.496260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:52.092 qpair failed and we were unable to recover it.
00:28:52.092 [2024-12-15 07:09:13.505946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.092 [2024-12-15 07:09:13.505986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.092 [2024-12-15 07:09:13.506003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.092 [2024-12-15 07:09:13.506012] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.092 [2024-12-15 07:09:13.506020] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:52.092 [2024-12-15 07:09:13.516287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:52.092 qpair failed and we were unable to recover it.
00:28:52.092 [2024-12-15 07:09:13.526044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.092 [2024-12-15 07:09:13.526085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.092 [2024-12-15 07:09:13.526102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.092 [2024-12-15 07:09:13.526111] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.092 [2024-12-15 07:09:13.526120] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:52.092 [2024-12-15 07:09:13.536340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:52.092 qpair failed and we were unable to recover it.
00:28:52.092 [2024-12-15 07:09:13.545989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.092 [2024-12-15 07:09:13.546028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.092 [2024-12-15 07:09:13.546048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.092 [2024-12-15 07:09:13.546057] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.092 [2024-12-15 07:09:13.546065] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:28:52.092 [2024-12-15 07:09:13.556108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:52.092 qpair failed and we were unable to recover it.
00:28:52.092 [2024-12-15 07:09:13.566245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.092 [2024-12-15 07:09:13.566289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.092 [2024-12-15 07:09:13.566306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.092 [2024-12-15 07:09:13.566315] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.092 [2024-12-15 07:09:13.566323] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.092 [2024-12-15 07:09:13.576609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.092 qpair failed and we were unable to recover it. 00:28:52.092 [2024-12-15 07:09:13.586181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.092 [2024-12-15 07:09:13.586221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.092 [2024-12-15 07:09:13.586238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.092 [2024-12-15 07:09:13.586247] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.092 [2024-12-15 07:09:13.586255] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.092 [2024-12-15 07:09:13.596589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.092 qpair failed and we were unable to recover it. 00:28:52.092 [2024-12-15 07:09:13.606249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.092 [2024-12-15 07:09:13.606287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.092 [2024-12-15 07:09:13.606303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.092 [2024-12-15 07:09:13.606312] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.092 [2024-12-15 07:09:13.606320] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.092 [2024-12-15 07:09:13.616530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.092 qpair failed and we were unable to recover it. 
00:28:52.092 [2024-12-15 07:09:13.626292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.092 [2024-12-15 07:09:13.626332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.092 [2024-12-15 07:09:13.626348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.092 [2024-12-15 07:09:13.626357] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.102 [2024-12-15 07:09:13.626369] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.102 [2024-12-15 07:09:13.636659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.102 qpair failed and we were unable to recover it. 00:28:52.102 [2024-12-15 07:09:13.646306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.102 [2024-12-15 07:09:13.646348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.102 [2024-12-15 07:09:13.646365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.102 [2024-12-15 07:09:13.646374] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.102 [2024-12-15 07:09:13.646382] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.102 [2024-12-15 07:09:13.656717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.102 qpair failed and we were unable to recover it. 00:28:52.102 [2024-12-15 07:09:13.666466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.102 [2024-12-15 07:09:13.666508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.102 [2024-12-15 07:09:13.666524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.102 [2024-12-15 07:09:13.666533] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.102 [2024-12-15 07:09:13.666541] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.102 [2024-12-15 07:09:13.676876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.102 qpair failed and we were unable to recover it. 
00:28:52.102 [2024-12-15 07:09:13.686456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.102 [2024-12-15 07:09:13.686493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.102 [2024-12-15 07:09:13.686509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.102 [2024-12-15 07:09:13.686518] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.102 [2024-12-15 07:09:13.686527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.102 [2024-12-15 07:09:13.696801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.102 qpair failed and we were unable to recover it. 00:28:52.102 [2024-12-15 07:09:13.706534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.102 [2024-12-15 07:09:13.706578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.102 [2024-12-15 07:09:13.706595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.102 [2024-12-15 07:09:13.706604] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.102 [2024-12-15 07:09:13.706612] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.102 [2024-12-15 07:09:13.716995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.102 qpair failed and we were unable to recover it. 00:28:52.372 [2024-12-15 07:09:13.726654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.372 [2024-12-15 07:09:13.726693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.372 [2024-12-15 07:09:13.726709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.372 [2024-12-15 07:09:13.726718] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.372 [2024-12-15 07:09:13.726727] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.372 [2024-12-15 07:09:13.737051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.372 qpair failed and we were unable to recover it. 
00:28:52.372 [2024-12-15 07:09:13.746568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.746610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.746628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.746637] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.746645] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.757064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.766717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.766757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.766774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.766783] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.766791] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.777219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.786815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.786856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.786873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.786882] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.786890] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.797125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 
00:28:52.373 [2024-12-15 07:09:13.806875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.806923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.806939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.806951] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.806959] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.817317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.826914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.826955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.826971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.826986] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.826994] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.837463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.847024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.847064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.847080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.847089] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.847098] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.857448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 
00:28:52.373 [2024-12-15 07:09:13.867073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.867114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.867130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.867139] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.867148] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.877504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.887064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.887110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.887126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.887135] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.887144] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.898040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.907172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.907209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.907226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.907235] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.907244] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.917413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 
00:28:52.373 [2024-12-15 07:09:13.927224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.927262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.927279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.927288] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.927296] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.937653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.947166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.947208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.947224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.947233] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.947241] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.957552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 00:28:52.373 [2024-12-15 07:09:13.967328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.373 [2024-12-15 07:09:13.967369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.373 [2024-12-15 07:09:13.967385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.373 [2024-12-15 07:09:13.967393] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.373 [2024-12-15 07:09:13.967402] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.373 [2024-12-15 07:09:13.977796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.373 qpair failed and we were unable to recover it. 
00:28:52.373 [2024-12-15 07:09:13.987431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.374 [2024-12-15 07:09:13.987468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.374 [2024-12-15 07:09:13.987488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.374 [2024-12-15 07:09:13.987497] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.374 [2024-12-15 07:09:13.987505] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.374 [2024-12-15 07:09:13.997875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.374 qpair failed and we were unable to recover it. 00:28:52.374 [2024-12-15 07:09:14.007447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.374 [2024-12-15 07:09:14.007482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.374 [2024-12-15 07:09:14.007499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.374 [2024-12-15 07:09:14.007508] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.374 [2024-12-15 07:09:14.007517] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.017820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 00:28:52.639 [2024-12-15 07:09:14.027536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.027576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.027592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.027601] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.027610] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.037981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 
00:28:52.639 [2024-12-15 07:09:14.047487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.047535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.047551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.047560] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.047569] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.057914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 00:28:52.639 [2024-12-15 07:09:14.067605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.067642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.067659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.067668] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.067680] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.078081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 00:28:52.639 [2024-12-15 07:09:14.087728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.087769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.087786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.087795] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.087803] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.098248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 
00:28:52.639 [2024-12-15 07:09:14.107740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.107780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.107797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.107806] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.107814] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.118147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 00:28:52.639 [2024-12-15 07:09:14.127752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.127792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.127808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.127817] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.127825] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.138400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 00:28:52.639 [2024-12-15 07:09:14.147736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.147777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.147792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.147801] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.147809] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.158206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 
00:28:52.639 [2024-12-15 07:09:14.167879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.167921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.167937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.167946] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.167955] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.178364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 00:28:52.639 [2024-12-15 07:09:14.187917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.187957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.187980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.187989] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.187998] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.198279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 00:28:52.639 [2024-12-15 07:09:14.208028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.208071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.208087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.208096] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.639 [2024-12-15 07:09:14.208105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.639 [2024-12-15 07:09:14.218471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.639 qpair failed and we were unable to recover it. 
00:28:52.639 [2024-12-15 07:09:14.228030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.639 [2024-12-15 07:09:14.228074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.639 [2024-12-15 07:09:14.228090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.639 [2024-12-15 07:09:14.228099] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.640 [2024-12-15 07:09:14.228107] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.640 [2024-12-15 07:09:14.238341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.640 qpair failed and we were unable to recover it. 00:28:52.640 [2024-12-15 07:09:14.248204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.640 [2024-12-15 07:09:14.248245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.640 [2024-12-15 07:09:14.248262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.640 [2024-12-15 07:09:14.248274] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.640 [2024-12-15 07:09:14.248283] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.640 [2024-12-15 07:09:14.258588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.640 qpair failed and we were unable to recover it. 00:28:52.640 [2024-12-15 07:09:14.268159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.640 [2024-12-15 07:09:14.268198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.640 [2024-12-15 07:09:14.268215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.640 [2024-12-15 07:09:14.268224] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.640 [2024-12-15 07:09:14.268232] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.278655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 
00:28:52.899 [2024-12-15 07:09:14.288211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.288250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.288266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.288275] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.288283] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.298571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 00:28:52.899 [2024-12-15 07:09:14.308216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.308256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.308272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.308281] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.308290] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.318708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 00:28:52.899 [2024-12-15 07:09:14.328329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.328370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.328387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.328396] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.328405] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.338713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 
00:28:52.899 [2024-12-15 07:09:14.348463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.348505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.348522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.348531] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.348539] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.358712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 00:28:52.899 [2024-12-15 07:09:14.368430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.368470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.368486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.368495] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.368504] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.379040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 00:28:52.899 [2024-12-15 07:09:14.388603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.388643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.388660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.388668] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.388677] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.398905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 
00:28:52.899 [2024-12-15 07:09:14.408645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.408684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.408700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.408709] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.408718] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.899 [2024-12-15 07:09:14.419141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.899 qpair failed and we were unable to recover it. 00:28:52.899 [2024-12-15 07:09:14.428660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.899 [2024-12-15 07:09:14.428703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.899 [2024-12-15 07:09:14.428722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.899 [2024-12-15 07:09:14.428731] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.899 [2024-12-15 07:09:14.428739] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.900 [2024-12-15 07:09:14.439049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.900 qpair failed and we were unable to recover it. 00:28:52.900 [2024-12-15 07:09:14.448720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.900 [2024-12-15 07:09:14.448758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.900 [2024-12-15 07:09:14.448774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.900 [2024-12-15 07:09:14.448783] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.900 [2024-12-15 07:09:14.448791] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.900 [2024-12-15 07:09:14.459194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.900 qpair failed and we were unable to recover it. 
00:28:52.900 [2024-12-15 07:09:14.468738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.900 [2024-12-15 07:09:14.468773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.900 [2024-12-15 07:09:14.468790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.900 [2024-12-15 07:09:14.468799] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.900 [2024-12-15 07:09:14.468807] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.900 [2024-12-15 07:09:14.478987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.900 qpair failed and we were unable to recover it. 00:28:52.900 [2024-12-15 07:09:14.488720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.900 [2024-12-15 07:09:14.488759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.900 [2024-12-15 07:09:14.488775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.900 [2024-12-15 07:09:14.488784] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.900 [2024-12-15 07:09:14.488793] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.900 [2024-12-15 07:09:14.499296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.900 qpair failed and we were unable to recover it. 00:28:52.900 [2024-12-15 07:09:14.508849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.900 [2024-12-15 07:09:14.508888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.900 [2024-12-15 07:09:14.508905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.900 [2024-12-15 07:09:14.508914] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.900 [2024-12-15 07:09:14.508922] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:52.900 [2024-12-15 07:09:14.519265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.900 qpair failed and we were unable to recover it. 
00:28:52.900 [2024-12-15 07:09:14.529098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.900 [2024-12-15 07:09:14.529141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.900 [2024-12-15 07:09:14.529157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.900 [2024-12-15 07:09:14.529165] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.900 [2024-12-15 07:09:14.529174] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:53.159 [2024-12-15 07:09:14.539993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.159 qpair failed and we were unable to recover it. 00:28:53.159 [2024-12-15 07:09:14.548981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.159 [2024-12-15 07:09:14.549022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.159 [2024-12-15 07:09:14.549039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.159 [2024-12-15 07:09:14.549048] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.159 [2024-12-15 07:09:14.549057] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:53.159 [2024-12-15 07:09:14.559290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.159 qpair failed and we were unable to recover it. 00:28:53.159 [2024-12-15 07:09:14.569051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.159 [2024-12-15 07:09:14.569091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.159 [2024-12-15 07:09:14.569108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.159 [2024-12-15 07:09:14.569116] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.159 [2024-12-15 07:09:14.569125] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:53.159 [2024-12-15 07:09:14.579781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.159 qpair failed and we were unable to recover it. 
00:28:53.159 [2024-12-15 07:09:14.589191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.159 [2024-12-15 07:09:14.589231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.159 [2024-12-15 07:09:14.589248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.159 [2024-12-15 07:09:14.589257] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.159 [2024-12-15 07:09:14.589266] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:28:53.159 [2024-12-15 07:09:14.599644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.159 qpair failed and we were unable to recover it. 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed
00:28:54.095 Read completed with error (sct=0, sc=8) 00:28:54.095 starting I/O failed 00:28:54.095 Write completed with error (sct=0, sc=8) 00:28:54.096 starting I/O failed 00:28:54.096 Read completed with error (sct=0, sc=8) 00:28:54.096 starting I/O failed 00:28:54.096 [2024-12-15 07:09:15.604872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.096 [2024-12-15 07:09:15.611859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.096 [2024-12-15 07:09:15.611903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.096 [2024-12-15 07:09:15.611922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.096 [2024-12-15 07:09:15.611932] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.096 [2024-12-15 07:09:15.611940] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:54.096 [2024-12-15 07:09:15.622643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.096 qpair failed and we were unable to recover it. 00:28:54.096 [2024-12-15 07:09:15.632252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.096 [2024-12-15 07:09:15.632291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.096 [2024-12-15 07:09:15.632309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.096 [2024-12-15 07:09:15.632318] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.096 [2024-12-15 07:09:15.632326] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:28:54.096 [2024-12-15 07:09:15.642592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.096 qpair failed and we were unable to recover it. 00:28:54.096 [2024-12-15 07:09:15.652309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.096 [2024-12-15 07:09:15.652349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.096 [2024-12-15 07:09:15.652374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.096 [2024-12-15 07:09:15.652385] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.096 [2024-12-15 07:09:15.652394] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:54.096 [2024-12-15 07:09:15.662792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.096 qpair failed and we were unable to recover it.
00:28:54.096 [2024-12-15 07:09:15.672299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.096 [2024-12-15 07:09:15.672341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.096 [2024-12-15 07:09:15.672357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.096 [2024-12-15 07:09:15.672367] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.096 [2024-12-15 07:09:15.672375] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:28:54.096 [2024-12-15 07:09:15.682782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.096 qpair failed and we were unable to recover it. 00:28:54.096 [2024-12-15 07:09:15.682914] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:54.096 A controller has encountered a failure and is being reset. 00:28:54.096 [2024-12-15 07:09:15.692443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.096 [2024-12-15 07:09:15.692486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.096 [2024-12-15 07:09:15.692514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.096 [2024-12-15 07:09:15.692528] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.096 [2024-12-15 07:09:15.692540] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:54.096 [2024-12-15 07:09:15.702698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.096 qpair failed and we were unable to recover it. 00:28:54.096 [2024-12-15 07:09:15.712416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.096 [2024-12-15 07:09:15.712460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.096 [2024-12-15 07:09:15.712478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.096 [2024-12-15 07:09:15.712488] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.096 [2024-12-15 07:09:15.712497] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:28:54.096 [2024-12-15 07:09:15.722897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.096 qpair failed and we were unable to recover it. 
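With every I/O-queue CONNECT rejected, the host's keep-alive submission fails too ("Submitting Keep Alive failed", just above), so the driver marks the controller failed and resets it; the target is reachable again by then, which is why the lines below end in "Controller properly reset" and a fresh controller initialization. To pair failures with recoveries in a saved log (console.log again assumed):

  # each failure notice should eventually be matched by a reset notice
  grep -n 'encountered a failure\|properly reset' console.log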
00:28:54.096 [2024-12-15 07:09:15.723066] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:54.355 [2024-12-15 07:09:15.757316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:54.355 Controller properly reset. 00:28:54.355 Initializing NVMe Controllers 00:28:54.355 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.355 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.355 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:54.355 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:54.355 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:54.355 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:54.355 Initialization complete. Launching workers. 00:28:54.355 Starting thread on core 1 00:28:54.355 Starting thread on core 2 00:28:54.355 Starting thread on core 3 00:28:54.355 Starting thread on core 0 00:28:54.355 07:09:15 -- host/target_disconnect.sh@59 -- # sync 00:28:54.355 00:28:54.355 real 0m12.597s 00:28:54.355 user 0m27.365s 00:28:54.355 sys 0m3.076s 00:28:54.355 07:09:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:54.355 07:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:54.355 ************************************ 00:28:54.355 END TEST nvmf_target_disconnect_tc2 00:28:54.355 ************************************ 00:28:54.355 07:09:15 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:28:54.355 07:09:15 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:28:54.355 07:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:54.355 07:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:54.355 07:09:15 -- common/autotest_common.sh@10 -- # set +x 00:28:54.355 ************************************ 00:28:54.355 START TEST nvmf_target_disconnect_tc3 00:28:54.355 ************************************ 00:28:54.355 07:09:15 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3 00:28:54.355 07:09:15 -- host/target_disconnect.sh@65 -- # reconnectpid=1506888 00:28:54.355 07:09:15 -- host/target_disconnect.sh@67 -- # sleep 2 00:28:54.355 07:09:15 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:28:54.355 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.258 07:09:17 -- host/target_disconnect.sh@68 -- # kill -9 1505614 00:28:56.258 07:09:17 -- host/target_disconnect.sh@70 -- # sleep 2 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed 
with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Write completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 Read completed with error (sct=0, sc=8) 00:28:57.636 starting I/O failed 00:28:57.636 [2024-12-15 07:09:19.039254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.572 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 1505614 Killed "${NVMF_APP[@]}" "$@" 00:28:58.572 07:09:19 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:28:58.572 07:09:19 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:58.572 07:09:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:58.572 07:09:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:58.572 07:09:19 -- common/autotest_common.sh@10 -- # set +x 00:28:58.572 07:09:19 -- nvmf/common.sh@469 -- # nvmfpid=1507545 00:28:58.572 07:09:19 -- nvmf/common.sh@470 -- # waitforlisten 1507545 00:28:58.572 07:09:19 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:58.572 07:09:19 -- common/autotest_common.sh@829 -- # '[' -z 1507545 ']' 00:28:58.572 07:09:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.572 07:09:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.572 07:09:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.572 07:09:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.572 07:09:19 -- common/autotest_common.sh@10 -- # set +x 00:28:58.572 [2024-12-15 07:09:19.921160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:58.572 [2024-12-15 07:09:19.921214] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.572 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.572 [2024-12-15 07:09:20.007182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Write completed with error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 Read completed with 
error (sct=0, sc=8) 00:28:58.572 starting I/O failed 00:28:58.572 [2024-12-15 07:09:20.044227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.572 [2024-12-15 07:09:20.047187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:58.572 [2024-12-15 07:09:20.047290] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.572 [2024-12-15 07:09:20.047304] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.572 [2024-12-15 07:09:20.047313] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.572 [2024-12-15 07:09:20.047429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:58.572 [2024-12-15 07:09:20.047539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:58.572 [2024-12-15 07:09:20.047647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:58.572 [2024-12-15 07:09:20.047649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:59.139 07:09:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.139 07:09:20 -- common/autotest_common.sh@862 -- # return 0 00:28:59.139 07:09:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:59.139 07:09:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.139 07:09:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.399 07:09:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.399 07:09:20 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:59.399 07:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.399 07:09:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.399 Malloc0 00:28:59.399 07:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.399 07:09:20 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:59.399 07:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.399 07:09:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.399 [2024-12-15 07:09:20.842152] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xca6ab0/0xcb2580) succeed. 00:28:59.399 [2024-12-15 07:09:20.851668] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xca8050/0xcf3c20) succeed. 
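The rpc_cmd calls traced above, together with the subsystem calls that follow, are thin wrappers around SPDK's scripts/rpc.py. A minimal stand-alone sketch of the same tc3 bring-up, assuming a running nvmf_tgt on the default RPC socket and the SPDK repo root as the working directory (all arguments are copied verbatim from the trace; only the invocation form is an assumption):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

Note the listener goes up on 192.168.100.9, the alternate address: tc3 exercises failover, so the replacement target deliberately listens on the second IP.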
00:28:59.399 07:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.399 07:09:20 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.399 07:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.399 07:09:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.399 07:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.399 07:09:20 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:59.399 07:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.399 07:09:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.399 07:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.399 07:09:20 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:28:59.399 07:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.399 07:09:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.399 [2024-12-15 07:09:20.994617] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:28:59.399 07:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.399 07:09:20 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:28:59.399 07:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.399 07:09:20 -- common/autotest_common.sh@10 -- # set +x 00:28:59.399 07:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.399 07:09:21 -- host/target_disconnect.sh@73 -- # wait 1506888 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with 
error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Write completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 Read completed with error (sct=0, sc=8) 00:28:59.658 starting I/O failed 00:28:59.658 [2024-12-15 07:09:21.049226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.658 [2024-12-15 07:09:21.050787] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:59.658 [2024-12-15 07:09:21.050808] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:59.658 [2024-12-15 07:09:21.050824] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:00.594 [2024-12-15 07:09:22.054714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.594 qpair failed and we were unable to recover it. 00:29:00.594 [2024-12-15 07:09:22.056203] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:00.594 [2024-12-15 07:09:22.056221] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:00.594 [2024-12-15 07:09:22.056229] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:01.531 [2024-12-15 07:09:23.060188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.531 qpair failed and we were unable to recover it. 00:29:01.531 [2024-12-15 07:09:23.061620] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:01.531 [2024-12-15 07:09:23.061637] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:01.531 [2024-12-15 07:09:23.061644] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:02.467 [2024-12-15 07:09:24.065515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:02.467 qpair failed and we were unable to recover it. 
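Each retry above is the same roughly one-second cycle: the RDMA CM answers the connect with RDMA_CM_EVENT_REJECTED instead of RDMA_CM_EVENT_ESTABLISHED, nvme_rdma_connect_established surfaces that as RDMA connect error -74 (likely -EBADMSG), and the completion poller then reports CQ transport error -6 about a second later. One way to pull the cadence out of a saved log (console.log assumed; the pattern matches the trace lines above):

  # one timestamped hit per connect attempt
  grep -o '\[2024-12-15 [0-9:.]*\] nvme_rdma.c:1163' console.log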
00:29:02.467 [2024-12-15 07:09:24.067023] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:02.467 [2024-12-15 07:09:24.067040] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:02.467 [2024-12-15 07:09:24.067048] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:03.841 [2024-12-15 07:09:25.071044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:03.841 qpair failed and we were unable to recover it. 00:29:03.841 [2024-12-15 07:09:25.072484] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:03.841 [2024-12-15 07:09:25.072504] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:03.842 [2024-12-15 07:09:25.072512] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:04.776 [2024-12-15 07:09:26.076409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.776 qpair failed and we were unable to recover it. 00:29:04.776 [2024-12-15 07:09:26.077836] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:04.776 [2024-12-15 07:09:26.077852] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:04.776 [2024-12-15 07:09:26.077860] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:05.711 [2024-12-15 07:09:27.081702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:05.711 qpair failed and we were unable to recover it. 00:29:05.711 [2024-12-15 07:09:27.083267] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:05.711 [2024-12-15 07:09:27.083285] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:05.711 [2024-12-15 07:09:27.083292] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:06.647 [2024-12-15 07:09:28.087103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:06.647 qpair failed and we were unable to recover it. 00:29:06.647 [2024-12-15 07:09:28.088735] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:06.647 [2024-12-15 07:09:28.088758] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:06.647 [2024-12-15 07:09:28.088766] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:07.583 [2024-12-15 07:09:29.092531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.583 qpair failed and we were unable to recover it. 
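This whole sequence is driven by the reconnect example launched at the start of tc3 with an alternate transport address; once the reconnect cycles against 192.168.100.8 keep failing, the keep-alive failure below triggers a controller reset that resorts to the failover address 192.168.100.9. Reproducing the invocation by hand from the SPDK repo root (flags copied from the trace; only the relative path is adjusted):

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'

That is: queue depth 32, 4 KiB I/Os, random read/write at a 50% read mix, for 10 seconds on cores 0-3, with alt_traddr supplying the address to fail over to.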
00:29:07.583 [2024-12-15 07:09:29.093985] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:07.583 [2024-12-15 07:09:29.094002] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:07.583 [2024-12-15 07:09:29.094009] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:08.518 [2024-12-15 07:09:30.097935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.518 qpair failed and we were unable to recover it. 00:29:08.518 [2024-12-15 07:09:30.098013] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:08.518 A controller has encountered a failure and is being reset. 00:29:08.518 Resorting to new failover address 192.168.100.9 00:29:08.518 [2024-12-15 07:09:30.098114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.518 [2024-12-15 07:09:30.098191] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.518 [2024-12-15 07:09:30.100107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.518 Controller properly reset. 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read 
completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Write completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 Read completed with error (sct=0, sc=8) 00:29:09.895 starting I/O failed 00:29:09.895 [2024-12-15 07:09:31.143539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.895 Initializing NVMe Controllers 00:29:09.895 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:09.895 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:09.895 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:09.895 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:09.895 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:09.895 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:09.895 Initialization complete. Launching workers. 00:29:09.895 Starting thread on core 1 00:29:09.895 Starting thread on core 2 00:29:09.895 Starting thread on core 3 00:29:09.895 Starting thread on core 0 00:29:09.895 07:09:31 -- host/target_disconnect.sh@74 -- # sync 00:29:09.895 00:29:09.895 real 0m15.339s 00:29:09.895 user 0m56.452s 00:29:09.895 sys 0m4.735s 00:29:09.895 07:09:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:09.895 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:09.895 ************************************ 00:29:09.895 END TEST nvmf_target_disconnect_tc3 00:29:09.895 ************************************ 00:29:09.895 07:09:31 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:09.895 07:09:31 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:09.895 07:09:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:09.895 07:09:31 -- nvmf/common.sh@116 -- # sync 00:29:09.895 07:09:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:09.895 07:09:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:09.895 07:09:31 -- nvmf/common.sh@119 -- # set +e 00:29:09.895 07:09:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:09.895 07:09:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:09.895 rmmod nvme_rdma 00:29:09.895 rmmod nvme_fabrics 00:29:09.895 07:09:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:09.895 07:09:31 -- nvmf/common.sh@123 -- # set -e 00:29:09.895 07:09:31 -- nvmf/common.sh@124 -- # return 0 00:29:09.895 07:09:31 -- nvmf/common.sh@477 -- # '[' -n 1507545 ']' 00:29:09.895 07:09:31 -- nvmf/common.sh@478 -- # killprocess 1507545 00:29:09.895 07:09:31 -- common/autotest_common.sh@936 -- # '[' -z 1507545 ']' 00:29:09.895 07:09:31 -- common/autotest_common.sh@940 -- # kill -0 1507545 00:29:09.895 07:09:31 -- common/autotest_common.sh@941 -- # uname 00:29:09.895 07:09:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:09.895 
07:09:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1507545 00:29:09.895 07:09:31 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:29:09.895 07:09:31 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:29:09.895 07:09:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1507545' 00:29:09.895 killing process with pid 1507545 00:29:09.895 07:09:31 -- common/autotest_common.sh@955 -- # kill 1507545 00:29:09.895 07:09:31 -- common/autotest_common.sh@960 -- # wait 1507545 00:29:10.155 07:09:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:10.155 07:09:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:10.155 00:29:10.155 real 0m36.306s 00:29:10.155 user 2m12.399s 00:29:10.155 sys 0m13.511s 00:29:10.155 07:09:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.155 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:10.155 ************************************ 00:29:10.155 END TEST nvmf_target_disconnect 00:29:10.155 ************************************ 00:29:10.155 07:09:31 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:10.155 07:09:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:10.155 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:10.155 07:09:31 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:10.155 00:29:10.155 real 21m7.217s 00:29:10.155 user 67m41.115s 00:29:10.155 sys 4m54.553s 00:29:10.155 07:09:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:10.155 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:10.155 ************************************ 00:29:10.155 END TEST nvmf_rdma 00:29:10.155 ************************************ 00:29:10.155 07:09:31 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:10.155 07:09:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:10.155 07:09:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.155 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:10.155 ************************************ 00:29:10.155 START TEST spdkcli_nvmf_rdma 00:29:10.155 ************************************ 00:29:10.155 07:09:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:10.414 * Looking for test storage... 
00:29:10.414 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:10.414 07:09:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:10.414 07:09:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:10.414 07:09:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:10.414 07:09:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:10.414 07:09:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:10.414 07:09:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:10.414 07:09:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:10.414 07:09:31 -- scripts/common.sh@335 -- # IFS=.-: 00:29:10.414 07:09:31 -- scripts/common.sh@335 -- # read -ra ver1 00:29:10.415 07:09:31 -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.415 07:09:31 -- scripts/common.sh@336 -- # read -ra ver2 00:29:10.415 07:09:31 -- scripts/common.sh@337 -- # local 'op=<' 00:29:10.415 07:09:31 -- scripts/common.sh@339 -- # ver1_l=2 00:29:10.415 07:09:31 -- scripts/common.sh@340 -- # ver2_l=1 00:29:10.415 07:09:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:10.415 07:09:31 -- scripts/common.sh@343 -- # case "$op" in 00:29:10.415 07:09:31 -- scripts/common.sh@344 -- # : 1 00:29:10.415 07:09:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:10.415 07:09:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:10.415 07:09:31 -- scripts/common.sh@364 -- # decimal 1 00:29:10.415 07:09:31 -- scripts/common.sh@352 -- # local d=1 00:29:10.415 07:09:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.415 07:09:31 -- scripts/common.sh@354 -- # echo 1 00:29:10.415 07:09:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:10.415 07:09:31 -- scripts/common.sh@365 -- # decimal 2 00:29:10.415 07:09:31 -- scripts/common.sh@352 -- # local d=2 00:29:10.415 07:09:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.415 07:09:31 -- scripts/common.sh@354 -- # echo 2 00:29:10.415 07:09:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:10.415 07:09:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:10.415 07:09:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:10.415 07:09:31 -- scripts/common.sh@367 -- # return 0 00:29:10.415 07:09:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.415 07:09:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:10.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.415 --rc genhtml_branch_coverage=1 00:29:10.415 --rc genhtml_function_coverage=1 00:29:10.415 --rc genhtml_legend=1 00:29:10.415 --rc geninfo_all_blocks=1 00:29:10.415 --rc geninfo_unexecuted_blocks=1 00:29:10.415 00:29:10.415 ' 00:29:10.415 07:09:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:10.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.415 --rc genhtml_branch_coverage=1 00:29:10.415 --rc genhtml_function_coverage=1 00:29:10.415 --rc genhtml_legend=1 00:29:10.415 --rc geninfo_all_blocks=1 00:29:10.415 --rc geninfo_unexecuted_blocks=1 00:29:10.415 00:29:10.415 ' 00:29:10.415 07:09:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:10.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.415 --rc genhtml_branch_coverage=1 00:29:10.415 --rc genhtml_function_coverage=1 00:29:10.415 --rc genhtml_legend=1 00:29:10.415 --rc geninfo_all_blocks=1 00:29:10.415 --rc geninfo_unexecuted_blocks=1 00:29:10.415 00:29:10.415 ' 
00:29:10.415 07:09:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:10.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.415 --rc genhtml_branch_coverage=1 00:29:10.415 --rc genhtml_function_coverage=1 00:29:10.415 --rc genhtml_legend=1 00:29:10.415 --rc geninfo_all_blocks=1 00:29:10.415 --rc geninfo_unexecuted_blocks=1 00:29:10.415 00:29:10.415 ' 00:29:10.415 07:09:31 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:10.415 07:09:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:10.415 07:09:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:10.415 07:09:31 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.415 07:09:31 -- nvmf/common.sh@7 -- # uname -s 00:29:10.415 07:09:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.415 07:09:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.415 07:09:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.415 07:09:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.415 07:09:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.415 07:09:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.415 07:09:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.415 07:09:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.415 07:09:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.415 07:09:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.415 07:09:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:10.415 07:09:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:10.415 07:09:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.415 07:09:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.415 07:09:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.415 07:09:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:10.415 07:09:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.415 07:09:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.415 07:09:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.415 07:09:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.415 07:09:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.415 07:09:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.415 07:09:31 -- paths/export.sh@5 -- # export PATH 00:29:10.415 07:09:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.415 07:09:31 -- nvmf/common.sh@46 -- # : 0 00:29:10.415 07:09:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:10.415 07:09:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:10.415 07:09:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:10.415 07:09:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.415 07:09:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.415 07:09:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:10.415 07:09:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:10.415 07:09:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:10.415 07:09:31 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:10.415 07:09:31 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:10.415 07:09:31 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:10.415 07:09:31 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:10.415 07:09:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:10.415 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:10.415 07:09:31 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:10.415 07:09:31 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1509708 00:29:10.415 07:09:31 -- spdkcli/common.sh@34 -- # waitforlisten 1509708 00:29:10.415 07:09:31 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:10.415 07:09:31 -- common/autotest_common.sh@829 -- # '[' -z 1509708 ']' 00:29:10.415 07:09:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.415 07:09:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:10.415 07:09:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.415 07:09:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.415 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:29:10.415 [2024-12-15 07:09:32.025767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:10.415 [2024-12-15 07:09:32.025827] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509708 ] 00:29:10.674 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.674 [2024-12-15 07:09:32.097118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:10.674 [2024-12-15 07:09:32.135437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:10.674 [2024-12-15 07:09:32.135604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.674 [2024-12-15 07:09:32.135606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.242 07:09:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:11.242 07:09:32 -- common/autotest_common.sh@862 -- # return 0 00:29:11.242 07:09:32 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:11.242 07:09:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:11.242 07:09:32 -- common/autotest_common.sh@10 -- # set +x 00:29:11.242 07:09:32 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:11.242 07:09:32 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:11.242 07:09:32 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:11.242 07:09:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:11.242 07:09:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.242 07:09:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:11.242 07:09:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:11.242 07:09:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:11.242 07:09:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.500 07:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:11.500 07:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.500 07:09:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:11.500 07:09:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:11.500 07:09:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:11.500 07:09:32 -- common/autotest_common.sh@10 -- # set +x 00:29:18.065 07:09:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:18.065 07:09:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:18.065 07:09:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:18.065 07:09:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:18.065 07:09:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:18.065 07:09:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:18.065 07:09:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:18.065 07:09:39 -- nvmf/common.sh@294 -- # net_devs=() 00:29:18.065 07:09:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:18.065 07:09:39 -- nvmf/common.sh@295 -- # e810=() 00:29:18.065 07:09:39 -- nvmf/common.sh@295 -- # local -ga e810 00:29:18.065 07:09:39 -- nvmf/common.sh@296 -- # x722=() 00:29:18.065 07:09:39 -- nvmf/common.sh@296 -- # local -ga x722 00:29:18.065 07:09:39 -- nvmf/common.sh@297 -- # mlx=() 00:29:18.065 07:09:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:18.065 07:09:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@305 -- # 
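The nvmftestinit trace that follows scans the PCI bus for the Mellanox ports, loads the RDMA kernel modules, and puts the two test IPs on the RDMA netdevs. A rough manual equivalent of what nvmf/common.sh does below (interface names are taken from this host's trace; the explicit ip commands are an assumption, since the script derives the addresses dynamically):

  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe "$m"; done
  ip addr add 192.168.100.8/24 dev mlx_0_0   # becomes NVMF_FIRST_TARGET_IP
  ip addr add 192.168.100.9/24 dev mlx_0_1   # becomes NVMF_SECOND_TARGET_IP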
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.065 07:09:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:18.065 07:09:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:18.065 07:09:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:18.065 07:09:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:18.065 07:09:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:18.065 07:09:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:18.065 07:09:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:18.065 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:18.065 07:09:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:18.065 07:09:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:18.065 07:09:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:18.065 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:18.065 07:09:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:18.065 07:09:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:18.065 07:09:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:18.065 07:09:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.065 07:09:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:18.065 07:09:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.065 07:09:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:18.065 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:18.065 07:09:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.065 07:09:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:18.065 07:09:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.065 07:09:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:18.065 07:09:39 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.065 07:09:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:18.065 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:18.065 07:09:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.065 07:09:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:18.065 07:09:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:18.065 07:09:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:18.065 07:09:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:18.065 07:09:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:18.065 07:09:39 -- nvmf/common.sh@57 -- # uname 00:29:18.065 07:09:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:18.065 07:09:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:18.065 07:09:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:18.065 07:09:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:18.065 07:09:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:18.065 07:09:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:18.065 07:09:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:18.065 07:09:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:18.066 07:09:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:18.066 07:09:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:18.066 07:09:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:18.066 07:09:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:18.066 07:09:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:18.066 07:09:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:18.066 07:09:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:18.066 07:09:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:18.066 07:09:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@104 -- # continue 2 00:29:18.066 07:09:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:18.066 07:09:39 -- nvmf/common.sh@104 -- # continue 2 00:29:18.066 07:09:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:18.066 07:09:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:18.066 07:09:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:18.066 07:09:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:18.066 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:29:18.066 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:18.066 altname enp217s0f0np0 00:29:18.066 altname ens818f0np0 00:29:18.066 inet 192.168.100.8/24 scope global mlx_0_0 00:29:18.066 valid_lft forever preferred_lft forever 00:29:18.066 07:09:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:18.066 07:09:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:18.066 07:09:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:18.066 07:09:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:18.066 07:09:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:18.066 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:18.066 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:18.066 altname enp217s0f1np1 00:29:18.066 altname ens818f1np1 00:29:18.066 inet 192.168.100.9/24 scope global mlx_0_1 00:29:18.066 valid_lft forever preferred_lft forever 00:29:18.066 07:09:39 -- nvmf/common.sh@410 -- # return 0 00:29:18.066 07:09:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:18.066 07:09:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:18.066 07:09:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:18.066 07:09:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:18.066 07:09:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:18.066 07:09:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:18.066 07:09:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:18.066 07:09:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:18.066 07:09:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:18.066 07:09:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@104 -- # continue 2 00:29:18.066 07:09:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:18.066 07:09:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:18.066 07:09:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:18.066 07:09:39 -- nvmf/common.sh@104 -- # continue 2 00:29:18.066 07:09:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:18.066 07:09:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:18.066 07:09:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:18.066 07:09:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:18.066 07:09:39 -- 
nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:18.066 07:09:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:18.066 07:09:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:18.066 192.168.100.9' 00:29:18.066 07:09:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:18.066 192.168.100.9' 00:29:18.066 07:09:39 -- nvmf/common.sh@445 -- # head -n 1 00:29:18.066 07:09:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:18.066 07:09:39 -- nvmf/common.sh@446 -- # tail -n +2 00:29:18.066 07:09:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:18.066 192.168.100.9' 00:29:18.066 07:09:39 -- nvmf/common.sh@446 -- # head -n 1 00:29:18.066 07:09:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:18.066 07:09:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:18.066 07:09:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:18.066 07:09:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:18.066 07:09:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:18.066 07:09:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:18.066 07:09:39 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:18.066 07:09:39 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:18.066 07:09:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.066 07:09:39 -- common/autotest_common.sh@10 -- # set +x 00:29:18.325 07:09:39 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:18.325 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:18.325 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:18.325 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:18.325 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:18.325 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:18.325 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:18.325 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:18.325 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:18.325 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:18.325 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:18.325 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:18.325 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:18.325 ' 00:29:18.584 [2024-12-15 07:09:40.057579] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:21.118 [2024-12-15 07:09:42.352148] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d4e930/0x1d51180) succeed. 00:29:21.118 [2024-12-15 07:09:42.362675] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d4ffc0/0x1d92820) succeed. 
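The spdkcli job traced above drives the target through shell-style paths (/bdevs/malloc, nvmf/transport, /nvmf/subsystem, ...). The same target state can be reached with SPDK's JSON-RPC client; the sketch below is a rough, abbreviated equivalent for one subsystem, assuming a running nvmf_tgt and the stock scripts/rpc.py — it is not the test's own job script, and the transport options io_unit_size/max_io_qpairs_per_ctrlr set above are omitted for brevity:

  # Hedged sketch: approximate rpc.py equivalents of the spdkcli paths above.
  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3            # 32 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_transport -t rdma
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
      -s N37SXV509SRW -m 4 -a                                      # serial, max namespaces, allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -f ipv4 -s 4260

Each listener created this way produces one of the "NVMe/RDMA Target Listening" notices that follow.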
00:29:22.495 [2024-12-15 07:09:43.732475] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:25.028 [2024-12-15 07:09:46.176367] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:26.931 [2024-12-15 07:09:48.287317] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:28.305 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:28.305 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:28.305 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:28.305 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:28.305 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:28.305 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:28.305 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:28.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:28.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:28.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:28.305 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:28.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:28.305 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:28.564 07:09:50 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:28.564 07:09:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:28.564 07:09:50 -- common/autotest_common.sh@10 -- # set +x 00:29:28.564 07:09:50 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:28.564 07:09:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:28.564 07:09:50 -- common/autotest_common.sh@10 -- # set +x 00:29:28.564 07:09:50 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:28.564 07:09:50 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:28.821 07:09:50 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:29.079 07:09:50 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:29.079 07:09:50 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:29.079 07:09:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.079 07:09:50 -- common/autotest_common.sh@10 -- # set +x 00:29:29.079 07:09:50 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:29.079 07:09:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:29.080 07:09:50 -- common/autotest_common.sh@10 -- # set +x 00:29:29.080 07:09:50 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:29.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:29.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:29.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:29.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:29.080 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:29.080 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:29.080 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:29.080 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:29.080 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:29.080 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:29.080 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:29.080 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:29.080 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:29.080 ' 00:29:34.483 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:34.483 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:34.483 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:34.483 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:34.483 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:29:34.483 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:29:34.483 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:34.483 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:34.483 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:34.483 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:34.483 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:34.483 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:34.483 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:34.483 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:34.483 07:09:55 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:34.483 07:09:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.483 07:09:55 -- common/autotest_common.sh@10 -- # set +x 00:29:34.483 07:09:55 -- spdkcli/nvmf.sh@90 -- # killprocess 1509708 00:29:34.483 07:09:55 -- common/autotest_common.sh@936 -- # '[' -z 1509708 ']' 00:29:34.484 07:09:55 -- common/autotest_common.sh@940 -- # kill -0 1509708 00:29:34.484 07:09:55 -- common/autotest_common.sh@941 -- # uname 00:29:34.484 07:09:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:34.484 07:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1509708 00:29:34.484 07:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:34.484 07:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:34.484 07:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1509708' 00:29:34.484 killing process with pid 1509708 00:29:34.484 07:09:55 -- common/autotest_common.sh@955 -- # kill 1509708 00:29:34.484 [2024-12-15 07:09:55.620481] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:34.484 07:09:55 -- common/autotest_common.sh@960 -- # wait 1509708 00:29:34.484 07:09:55 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:29:34.484 07:09:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:34.484 07:09:55 -- nvmf/common.sh@116 -- # sync 00:29:34.484 07:09:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:34.484 07:09:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:34.484 07:09:55 -- nvmf/common.sh@119 -- # set +e 00:29:34.484 07:09:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:34.484 07:09:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:34.484 rmmod nvme_rdma 00:29:34.484 rmmod nvme_fabrics 00:29:34.484 07:09:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
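The nvmftestfini teardown traced above syncs, then unloads the kernel initiator modules; because module references can take a moment to drain after the last disconnect, nvmf/common.sh disables errexit and retries the unload. A condensed sketch of the pattern the trace shows (the sleep between attempts is an assumption; the trace only shows the loop and the modprobe calls):

  sync                                    # flush before tearing the fabric down
  set +e                                  # modprobe -r may fail while references drain
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && break   # retry until the module actually unloads
      sleep 1                             # assumed back-off between attempts
  done
  modprobe -v -r nvme-fabrics
  set -e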
00:29:34.484 07:09:55 -- nvmf/common.sh@123 -- # set -e 00:29:34.484 07:09:55 -- nvmf/common.sh@124 -- # return 0 00:29:34.484 07:09:55 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:29:34.484 07:09:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:34.484 07:09:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:34.484 00:29:34.484 real 0m24.146s 00:29:34.484 user 0m52.524s 00:29:34.484 sys 0m6.140s 00:29:34.484 07:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:34.484 07:09:55 -- common/autotest_common.sh@10 -- # set +x 00:29:34.484 ************************************ 00:29:34.484 END TEST spdkcli_nvmf_rdma 00:29:34.484 ************************************ 00:29:34.484 07:09:55 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:34.484 07:09:55 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:29:34.484 07:09:55 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:29:34.484 07:09:55 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:34.484 07:09:55 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:34.484 07:09:55 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:29:34.484 07:09:55 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:29:34.484 07:09:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.484 07:09:55 -- common/autotest_common.sh@10 -- # set +x 00:29:34.484 07:09:55 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:29:34.484 07:09:55 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:29:34.484 07:09:55 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:29:34.484 07:09:55 -- common/autotest_common.sh@10 -- # set +x 00:29:41.057 INFO: APP EXITING 00:29:41.057 INFO: killing all VMs 00:29:41.057 INFO: killing vhost app 00:29:41.057 INFO: EXIT DONE 00:29:42.963 Waiting for block devices as requested 00:29:43.222 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:43.222 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:43.223 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:43.223 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:43.482 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:43.482 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:43.482 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:43.482 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:43.741 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:43.741 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:43.741 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:44.001 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:44.001 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:44.001 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:44.260 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:44.260 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:44.260 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:47.555 Cleaning 00:29:47.555 Removing: /var/run/dpdk/spdk0/config 00:29:47.555 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:47.555 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:47.555 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:47.555 Removing: /var/run/dpdk/spdk1/config 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:47.555 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:47.555 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:47.555 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:47.555 Removing: /var/run/dpdk/spdk2/config 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:47.555 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:47.555 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:47.555 Removing: /var/run/dpdk/spdk3/config 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:47.555 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:47.555 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:47.555 Removing: /var/run/dpdk/spdk4/config 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:47.555 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:47.555 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:47.555 Removing: 
/dev/shm/bdevperf_trace.pid1340305 00:29:47.555 Removing: /dev/shm/bdevperf_trace.pid1433971 00:29:47.555 Removing: /dev/shm/bdev_svc_trace.1 00:29:47.555 Removing: /dev/shm/nvmf_trace.0 00:29:47.555 Removing: /dev/shm/spdk_tgt_trace.pid1176646 00:29:47.555 Removing: /var/run/dpdk/spdk0 00:29:47.555 Removing: /var/run/dpdk/spdk1 00:29:47.555 Removing: /var/run/dpdk/spdk2 00:29:47.555 Removing: /var/run/dpdk/spdk3 00:29:47.555 Removing: /var/run/dpdk/spdk4 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1173936 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1175222 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1176646 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1177277 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1182353 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1183836 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1184170 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1184500 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1184849 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1185189 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1185477 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1185759 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1186027 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1186755 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1189887 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1190283 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1190738 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1190759 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1191327 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1191596 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1192003 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1192181 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1192479 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1192685 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1192795 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1193061 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1193460 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1193726 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1194061 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1194360 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1194384 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1194553 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1194716 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1195001 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1195277 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1195565 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1195833 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1196052 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1196202 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1196424 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1196693 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1196976 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1197248 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1197531 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1197738 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1197931 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1198109 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1198391 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1198659 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1198946 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1199214 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1199447 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1199606 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1199808 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1200079 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1200360 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1200685 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1201040 00:29:47.555 
Removing: /var/run/dpdk/spdk_pid1201248 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1201703 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1202040 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1202323 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1202589 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1202878 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1203147 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1203426 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1203594 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1203779 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1204018 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1204304 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1204575 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1204860 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1204937 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1205277 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1209297 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1305953 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1310231 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1320650 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1326628 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1330112 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1330927 00:29:47.555 Removing: /var/run/dpdk/spdk_pid1340305 00:29:47.556 Removing: /var/run/dpdk/spdk_pid1340629 00:29:47.556 Removing: /var/run/dpdk/spdk_pid1344699 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1350642 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1353370 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1363473 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1388744 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1392333 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1397560 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1431870 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1432835 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1433971 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1438226 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1445346 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1446249 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1447142 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1448133 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1448486 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1452952 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1452961 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1457530 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1458074 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1458619 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1459418 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1459428 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1461864 00:29:47.815 Removing: /var/run/dpdk/spdk_pid1463832 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1466177 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1468101 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1470031 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1471994 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1478287 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1478835 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1481163 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1482225 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1489318 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1492279 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1497771 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1498047 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1504070 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1504517 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1506888 00:29:47.816 Removing: /var/run/dpdk/spdk_pid1509708 00:29:47.816 Clean 00:29:47.816 killing process with pid 1123480 
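This "Removing:" list is produced by autotest_cleanup (invoked at spdk/autotest.sh@373 above): it sweeps the per-instance DPDK runtime state (config, fbarray memseg metadata, hugepage_info, mp_socket) for each spdk0..spdk4 instance, the stale /var/run/dpdk/spdk_pid* entries, and the shm trace buffers left by the test apps. A minimal sketch of the same sweep — the globs here are assumptions, the real helper lives in autotest_common.sh:

  # Hypothetical condensed version of the cleanup listed above.
  rm -rf /var/run/dpdk/spdk*    # spdk0..spdk4 runtime dirs and spdk_pid* leftovers
  rm -f /dev/shm/*_trace.*      # bdevperf/bdev_svc/nvmf/spdk_tgt trace buffers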
00:30:05.917 killing process with pid 1123477 00:30:05.917 killing process with pid 1123479 00:30:05.917 killing process with pid 1123478 00:30:05.917 07:10:25 -- common/autotest_common.sh@1446 -- # return 0 00:30:05.917 07:10:25 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:05.917 07:10:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.918 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:05.918 07:10:25 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:05.918 07:10:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.918 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:30:05.918 07:10:25 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:05.918 07:10:25 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:05.918 07:10:25 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:05.918 07:10:25 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:05.918 07:10:25 -- spdk/autotest.sh@383 -- # hostname 00:30:05.918 07:10:25 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:05.918 geninfo: WARNING: invalid characters removed from testname! 00:30:20.808 07:10:41 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:22.189 07:10:43 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:24.096 07:10:45 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:25.478 07:10:46 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:26.858 07:10:48 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 
--rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:28.767 07:10:49 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:30.147 07:10:51 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:30.147 07:10:51 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:30:30.147 07:10:51 -- common/autotest_common.sh@1690 -- $ lcov --version 00:30:30.147 07:10:51 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:30:30.147 07:10:51 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:30:30.147 07:10:51 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:30:30.147 07:10:51 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:30:30.147 07:10:51 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:30:30.147 07:10:51 -- scripts/common.sh@335 -- $ IFS=.-: 00:30:30.147 07:10:51 -- scripts/common.sh@335 -- $ read -ra ver1 00:30:30.147 07:10:51 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:30.147 07:10:51 -- scripts/common.sh@336 -- $ read -ra ver2 00:30:30.147 07:10:51 -- scripts/common.sh@337 -- $ local 'op=<' 00:30:30.147 07:10:51 -- scripts/common.sh@339 -- $ ver1_l=2 00:30:30.147 07:10:51 -- scripts/common.sh@340 -- $ ver2_l=1 00:30:30.147 07:10:51 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:30:30.147 07:10:51 -- scripts/common.sh@343 -- $ case "$op" in 00:30:30.147 07:10:51 -- scripts/common.sh@344 -- $ : 1 00:30:30.147 07:10:51 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:30:30.147 07:10:51 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.147 07:10:51 -- scripts/common.sh@364 -- $ decimal 1 00:30:30.147 07:10:51 -- scripts/common.sh@352 -- $ local d=1 00:30:30.147 07:10:51 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:30.147 07:10:51 -- scripts/common.sh@354 -- $ echo 1 00:30:30.147 07:10:51 -- scripts/common.sh@364 -- $ ver1[v]=1 00:30:30.147 07:10:51 -- scripts/common.sh@365 -- $ decimal 2 00:30:30.147 07:10:51 -- scripts/common.sh@352 -- $ local d=2 00:30:30.147 07:10:51 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:30.147 07:10:51 -- scripts/common.sh@354 -- $ echo 2 00:30:30.147 07:10:51 -- scripts/common.sh@365 -- $ ver2[v]=2 00:30:30.147 07:10:51 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:30:30.147 07:10:51 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:30:30.147 07:10:51 -- scripts/common.sh@367 -- $ return 0 00:30:30.147 07:10:51 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.147 07:10:51 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:30:30.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.147 --rc genhtml_branch_coverage=1 00:30:30.147 --rc genhtml_function_coverage=1 00:30:30.147 --rc genhtml_legend=1 00:30:30.147 --rc geninfo_all_blocks=1 00:30:30.147 --rc geninfo_unexecuted_blocks=1 00:30:30.147 00:30:30.147 ' 00:30:30.147 07:10:51 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:30:30.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.147 --rc genhtml_branch_coverage=1 00:30:30.147 --rc genhtml_function_coverage=1 00:30:30.147 --rc genhtml_legend=1 00:30:30.147 --rc geninfo_all_blocks=1 00:30:30.147 --rc geninfo_unexecuted_blocks=1 00:30:30.148 00:30:30.148 ' 00:30:30.148 07:10:51 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:30:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.148 --rc genhtml_branch_coverage=1 00:30:30.148 --rc genhtml_function_coverage=1 00:30:30.148 --rc genhtml_legend=1 00:30:30.148 --rc geninfo_all_blocks=1 00:30:30.148 --rc geninfo_unexecuted_blocks=1 00:30:30.148 00:30:30.148 ' 00:30:30.148 07:10:51 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:30:30.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.148 --rc genhtml_branch_coverage=1 00:30:30.148 --rc genhtml_function_coverage=1 00:30:30.148 --rc genhtml_legend=1 00:30:30.148 --rc geninfo_all_blocks=1 00:30:30.148 --rc geninfo_unexecuted_blocks=1 00:30:30.148 00:30:30.148 ' 00:30:30.148 07:10:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:30.148 07:10:51 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:30.148 07:10:51 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.148 07:10:51 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.148 07:10:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.148 07:10:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.148 07:10:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.148 07:10:51 -- paths/export.sh@5 -- $ export PATH 00:30:30.148 07:10:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.148 07:10:51 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:30:30.148 07:10:51 -- common/autobuild_common.sh@440 -- $ date +%s 00:30:30.148 07:10:51 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734243051.XXXXXX 00:30:30.148 07:10:51 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734243051.g2NEBF 00:30:30.148 07:10:51 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:30:30.148 07:10:51 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:30:30.148 07:10:51 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:30:30.148 07:10:51 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:30:30.148 07:10:51 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:30.148 07:10:51 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:30.148 07:10:51 -- common/autobuild_common.sh@456 -- $ get_config_params 00:30:30.148 07:10:51 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:30:30.148 07:10:51 -- common/autotest_common.sh@10 -- $ set +x 00:30:30.148 07:10:51 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:30:30.148 07:10:51 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:30.148 07:10:51 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:30.148 07:10:51 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:30.148 07:10:51 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:30.148 07:10:51 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:30.148 07:10:51 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:30.148 
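Before capturing coverage, autotest_common.sh probes the installed lcov version (the cmp_versions/lt trace above) so it only passes the branch/function --rc options that this lcov understands. A condensed re-implementation of that comparison, not the script verbatim: versions are split on '.', '-' and ':' and compared numerically field by field, with missing fields treated as 0:

  # Sketch of the traced version check: returns success when $1 < $2.
  lt() {
      local -a v1 v2; local i
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions: not strictly less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"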
07:10:51 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:30.148 07:10:51 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:30.148 07:10:51 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:30.148 07:10:51 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:30.148 + [[ -n 1069186 ]] 00:30:30.148 + sudo kill 1069186 00:30:30.418 [Pipeline] } 00:30:30.434 [Pipeline] // stage 00:30:30.440 [Pipeline] } 00:30:30.455 [Pipeline] // timeout 00:30:30.460 [Pipeline] } 00:30:30.475 [Pipeline] // catchError 00:30:30.481 [Pipeline] } 00:30:30.496 [Pipeline] // wrap 00:30:30.502 [Pipeline] } 00:30:30.515 [Pipeline] // catchError 00:30:30.525 [Pipeline] stage 00:30:30.528 [Pipeline] { (Epilogue) 00:30:30.542 [Pipeline] catchError 00:30:30.543 [Pipeline] { 00:30:30.557 [Pipeline] echo 00:30:30.559 Cleanup processes 00:30:30.566 [Pipeline] sh 00:30:30.856 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:30.856 1530924 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:30.870 [Pipeline] sh 00:30:31.157 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:31.157 ++ grep -v 'sudo pgrep' 00:30:31.157 ++ awk '{print $1}' 00:30:31.157 + sudo kill -9 00:30:31.157 + true 00:30:31.169 [Pipeline] sh 00:30:31.455 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:31.455 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:38.067 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:30:40.618 [Pipeline] sh 00:30:40.906 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:40.906 Artifacts sizes are good 00:30:40.920 [Pipeline] archiveArtifacts 00:30:40.928 Archiving artifacts 00:30:41.067 [Pipeline] sh 00:30:41.352 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:30:41.365 [Pipeline] cleanWs 00:30:41.375 [WS-CLEANUP] Deleting project workspace... 00:30:41.375 [WS-CLEANUP] Deferred wipeout is used... 00:30:41.382 [WS-CLEANUP] done 00:30:41.384 [Pipeline] } 00:30:41.400 [Pipeline] // catchError 00:30:41.431 [Pipeline] sh 00:30:41.746 + logger -p user.info -t JENKINS-CI 00:30:41.755 [Pipeline] } 00:30:41.766 [Pipeline] // stage 00:30:41.770 [Pipeline] } 00:30:41.783 [Pipeline] // node 00:30:41.787 [Pipeline] End of Pipeline 00:30:41.825 Finished: SUCCESS
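The epilogue above repeats the prologue's leftover-process sweep before archiving: pgrep lists anything still holding the workspace's spdk tree, the pgrep invocation itself is filtered out, and whatever remains is killed hard. A sketch of that pipeline ($WORKSPACE stands in for the Jenkins-provided workspace path; '|| true' keeps the stage green when, as in this run, nothing was left to kill):

  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true   # $pids unquoted on purpose: one argument per surviving pid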