00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1057 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3724 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.105 Fetching changes from the remote Git repository 00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.137 Using shallow fetch with depth 1 00:00:00.137 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.137 > git --version # timeout=10 00:00:00.168 > git --version # 'git version 2.39.2' 00:00:00.168 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.281 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.293 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.304 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.304 > git config core.sparsecheckout # timeout=10 00:00:07.314 > git read-tree -mu HEAD # timeout=10 00:00:07.329 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.353 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.353 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.458 [Pipeline] Start of Pipeline 00:00:07.468 [Pipeline] library 00:00:07.470 Loading library shm_lib@master 00:00:07.470 Library shm_lib@master is cached. Copying from home. 00:00:07.487 [Pipeline] node 00:00:07.498 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:07.500 [Pipeline] { 00:00:07.511 [Pipeline] catchError 00:00:07.513 [Pipeline] { 00:00:07.525 [Pipeline] wrap 00:00:07.534 [Pipeline] { 00:00:07.542 [Pipeline] stage 00:00:07.544 [Pipeline] { (Prologue) 00:00:07.741 [Pipeline] sh 00:00:08.028 + logger -p user.info -t JENKINS-CI 00:00:08.047 [Pipeline] echo 00:00:08.048 Node: WFP21 00:00:08.056 [Pipeline] sh 00:00:08.359 [Pipeline] setCustomBuildProperty 00:00:08.372 [Pipeline] echo 00:00:08.374 Cleanup processes 00:00:08.380 [Pipeline] sh 00:00:08.666 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.666 1071399 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.678 [Pipeline] sh 00:00:08.963 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.963 ++ grep -v 'sudo pgrep' 00:00:08.963 ++ awk '{print $1}' 00:00:08.963 + sudo kill -9 00:00:08.963 + true 00:00:08.978 [Pipeline] cleanWs 00:00:08.988 [WS-CLEANUP] Deleting project workspace... 00:00:08.988 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.995 [WS-CLEANUP] done 00:00:09.000 [Pipeline] setCustomBuildProperty 00:00:09.016 [Pipeline] sh 00:00:09.300 + sudo git config --global --replace-all safe.directory '*' 00:00:09.398 [Pipeline] httpRequest 00:00:10.158 [Pipeline] echo 00:00:10.160 Sorcerer 10.211.164.20 is alive 00:00:10.169 [Pipeline] retry 00:00:10.171 [Pipeline] { 00:00:10.185 [Pipeline] httpRequest 00:00:10.189 HttpMethod: GET 00:00:10.190 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.191 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.211 Response Code: HTTP/1.1 200 OK 00:00:10.212 Success: Status code 200 is in the accepted range: 200,404 00:00:10.212 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:20.056 [Pipeline] } 00:00:20.075 [Pipeline] // retry 00:00:20.082 [Pipeline] sh 00:00:20.370 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:20.387 [Pipeline] httpRequest 00:00:20.794 [Pipeline] echo 00:00:20.796 Sorcerer 10.211.164.20 is alive 00:00:20.805 [Pipeline] retry 00:00:20.807 [Pipeline] { 00:00:20.819 [Pipeline] httpRequest 00:00:20.824 HttpMethod: GET 00:00:20.824 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:20.826 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:20.844 Response Code: HTTP/1.1 200 OK 00:00:20.844 Success: Status code 200 is in the accepted range: 200,404 00:00:20.845 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:29.354 [Pipeline] } 00:01:29.372 [Pipeline] // retry 00:01:29.380 [Pipeline] sh 00:01:29.665 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:32.213 [Pipeline] sh 00:01:32.501 + git -C spdk log --oneline -n5 00:01:32.501 c13c99a5e test: Various fixes for Fedora40 00:01:32.501 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:32.501 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:32.501 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:32.501 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:32.519 [Pipeline] withCredentials 00:01:32.530 > git --version # timeout=10 00:01:32.542 > git --version # 'git version 2.39.2' 00:01:32.559 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:32.561 [Pipeline] { 00:01:32.572 [Pipeline] retry 00:01:32.574 [Pipeline] { 00:01:32.591 [Pipeline] sh 00:01:32.878 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:33.150 [Pipeline] } 00:01:33.167 [Pipeline] // retry 00:01:33.171 [Pipeline] } 00:01:33.186 [Pipeline] // withCredentials 00:01:33.194 [Pipeline] httpRequest 00:01:33.539 [Pipeline] echo 00:01:33.540 Sorcerer 10.211.164.20 is alive 00:01:33.550 [Pipeline] retry 00:01:33.552 [Pipeline] { 00:01:33.565 [Pipeline] httpRequest 00:01:33.569 HttpMethod: GET 00:01:33.570 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:33.571 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:33.577 Response Code: HTTP/1.1 200 OK 00:01:33.578 Success: Status code 200 is in the accepted range: 200,404 00:01:33.578 Saving response body to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.903 [Pipeline] } 00:01:37.921 [Pipeline] // retry 00:01:37.929 [Pipeline] sh 00:01:38.214 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:39.635 [Pipeline] sh 00:01:39.921 + git -C dpdk log --oneline -n5 00:01:39.921 caf0f5d395 version: 22.11.4 00:01:39.921 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:39.921 dc9c799c7d vhost: fix missing spinlock unlock 00:01:39.921 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:39.921 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:39.931 [Pipeline] } 00:01:39.944 [Pipeline] // stage 00:01:39.952 [Pipeline] stage 00:01:39.954 [Pipeline] { (Prepare) 00:01:39.972 [Pipeline] writeFile 00:01:39.986 [Pipeline] sh 00:01:40.270 + logger -p user.info -t JENKINS-CI 00:01:40.282 [Pipeline] sh 00:01:40.567 + logger -p user.info -t JENKINS-CI 00:01:40.580 [Pipeline] sh 00:01:40.865 + cat autorun-spdk.conf 00:01:40.865 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.865 SPDK_TEST_NVMF=1 00:01:40.865 SPDK_TEST_NVME_CLI=1 00:01:40.865 SPDK_TEST_NVMF_NICS=mlx5 00:01:40.865 SPDK_RUN_UBSAN=1 00:01:40.865 NET_TYPE=phy 00:01:40.865 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.865 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:40.873 RUN_NIGHTLY=1 00:01:40.877 [Pipeline] readFile 00:01:40.902 [Pipeline] withEnv 00:01:40.904 [Pipeline] { 00:01:40.917 [Pipeline] sh 00:01:41.213 + set -ex 00:01:41.213 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:41.213 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:41.213 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.213 ++ SPDK_TEST_NVMF=1 00:01:41.213 ++ SPDK_TEST_NVME_CLI=1 00:01:41.213 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:41.213 ++ SPDK_RUN_UBSAN=1 00:01:41.213 ++ NET_TYPE=phy 00:01:41.213 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.213 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:41.213 ++ RUN_NIGHTLY=1 00:01:41.213 + case $SPDK_TEST_NVMF_NICS in 00:01:41.213 + DRIVERS=mlx5_ib 00:01:41.213 + [[ -n mlx5_ib ]] 00:01:41.213 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:41.213 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:47.854 rmmod: ERROR: Module irdma is not currently loaded 00:01:47.854 rmmod: ERROR: Module i40iw is not currently loaded 00:01:47.854 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:47.854 + true 00:01:47.854 + for D in $DRIVERS 00:01:47.854 + sudo modprobe mlx5_ib 00:01:47.854 + exit 0 00:01:47.864 [Pipeline] } 00:01:47.879 [Pipeline] // withEnv 00:01:47.884 [Pipeline] } 00:01:47.898 [Pipeline] // stage 00:01:47.908 [Pipeline] catchError 00:01:47.909 [Pipeline] { 00:01:47.924 [Pipeline] timeout 00:01:47.924 Timeout set to expire in 1 hr 0 min 00:01:47.925 [Pipeline] { 00:01:47.939 [Pipeline] stage 00:01:47.941 [Pipeline] { (Tests) 00:01:47.954 [Pipeline] sh 00:01:48.241 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:48.241 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:48.241 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:48.241 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:48.241 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:48.241 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:48.241 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:48.241 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:48.241 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:48.241 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:48.241 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:48.241 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:48.241 + source /etc/os-release 00:01:48.241 ++ NAME='Fedora Linux' 00:01:48.241 ++ VERSION='39 (Cloud Edition)' 00:01:48.241 ++ ID=fedora 00:01:48.241 ++ VERSION_ID=39 00:01:48.241 ++ VERSION_CODENAME= 00:01:48.241 ++ PLATFORM_ID=platform:f39 00:01:48.241 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:48.241 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:48.241 ++ LOGO=fedora-logo-icon 00:01:48.241 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:48.241 ++ HOME_URL=https://fedoraproject.org/ 00:01:48.241 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:48.241 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:48.241 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:48.241 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:48.241 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:48.241 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:48.241 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:48.241 ++ SUPPORT_END=2024-11-12 00:01:48.241 ++ VARIANT='Cloud Edition' 00:01:48.241 ++ VARIANT_ID=cloud 00:01:48.241 + uname -a 00:01:48.241 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:48.241 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:51.533 Hugepages 00:01:51.533 node hugesize free / total 00:01:51.533 node0 1048576kB 0 / 0 00:01:51.533 node0 2048kB 0 / 0 00:01:51.533 node1 1048576kB 0 / 0 00:01:51.533 node1 2048kB 0 / 0 00:01:51.533 00:01:51.533 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:51.533 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:51.533 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:51.533 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:51.533 + rm -f /tmp/spdk-ld-path 00:01:51.533 + source autorun-spdk.conf 00:01:51.533 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.533 ++ SPDK_TEST_NVMF=1 00:01:51.533 ++ SPDK_TEST_NVME_CLI=1 00:01:51.533 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:51.533 ++ SPDK_RUN_UBSAN=1 00:01:51.533 ++ NET_TYPE=phy 00:01:51.533 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:51.533 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:51.533 ++ RUN_NIGHTLY=1 00:01:51.533 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:51.533 + [[ -n '' ]] 00:01:51.533 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:51.533 + for M in /var/spdk/build-*-manifest.txt 
00:01:51.533 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:51.533 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:51.533 + for M in /var/spdk/build-*-manifest.txt 00:01:51.533 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:51.533 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:51.533 + for M in /var/spdk/build-*-manifest.txt 00:01:51.533 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:51.533 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:51.533 ++ uname 00:01:51.533 + [[ Linux == \L\i\n\u\x ]] 00:01:51.533 + sudo dmesg -T 00:01:51.533 + sudo dmesg --clear 00:01:51.533 + dmesg_pid=1072459 00:01:51.533 + [[ Fedora Linux == FreeBSD ]] 00:01:51.533 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.533 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.533 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:51.533 + [[ -x /usr/src/fio-static/fio ]] 00:01:51.533 + export FIO_BIN=/usr/src/fio-static/fio 00:01:51.533 + FIO_BIN=/usr/src/fio-static/fio 00:01:51.533 + sudo dmesg -Tw 00:01:51.533 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:51.533 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:51.533 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:51.533 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.533 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.533 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:51.533 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.533 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.533 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:51.533 Test configuration: 00:01:51.533 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.533 SPDK_TEST_NVMF=1 00:01:51.533 SPDK_TEST_NVME_CLI=1 00:01:51.533 SPDK_TEST_NVMF_NICS=mlx5 00:01:51.533 SPDK_RUN_UBSAN=1 00:01:51.533 NET_TYPE=phy 00:01:51.533 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:51.533 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:51.533 RUN_NIGHTLY=1 17:04:48 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:51.533 17:04:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:51.533 17:04:48 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:51.533 17:04:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.533 17:04:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.533 17:04:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.533 17:04:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.533 17:04:48 -- 
paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.533 17:04:48 -- paths/export.sh@5 -- $ export PATH 00:01:51.533 17:04:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.533 17:04:48 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:51.533 17:04:48 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:51.533 17:04:48 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734192288.XXXXXX 00:01:51.533 17:04:48 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734192288.rIgRG2 00:01:51.533 17:04:48 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:51.533 17:04:48 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:51.533 17:04:48 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:51.533 17:04:48 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:51.533 17:04:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:51.533 17:04:48 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:51.534 17:04:48 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:51.534 17:04:48 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:51.534 17:04:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.534 17:04:48 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:51.534 17:04:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:51.534 17:04:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:51.534 17:04:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:51.534 17:04:48 -- spdk/autobuild.sh@16 -- $ date -u 00:01:51.534 Sat Dec 14 04:04:48 PM UTC 2024 00:01:51.534 17:04:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:51.534 LTS-67-gc13c99a5e 00:01:51.534 17:04:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:51.534 17:04:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:51.534 17:04:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:51.534 17:04:48 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:51.534 17:04:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:51.534 17:04:48 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:51.534 ************************************ 00:01:51.534 START TEST ubsan 00:01:51.534 ************************************ 00:01:51.534 17:04:48 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:51.534 using ubsan 00:01:51.534 00:01:51.534 real 0m0.000s 00:01:51.534 user 0m0.000s 00:01:51.534 sys 0m0.000s 00:01:51.534 17:04:48 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:51.534 17:04:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.534 ************************************ 00:01:51.534 END TEST ubsan 00:01:51.534 ************************************ 00:01:51.534 17:04:48 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:51.534 17:04:48 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:51.534 17:04:48 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:51.534 17:04:48 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:51.534 17:04:48 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:51.534 17:04:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.793 ************************************ 00:01:51.793 START TEST build_native_dpdk 00:01:51.793 ************************************ 00:01:51.793 17:04:48 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:01:51.793 17:04:48 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:51.793 17:04:48 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:51.793 17:04:48 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:51.793 17:04:48 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:51.793 17:04:48 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:51.793 17:04:48 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:51.793 17:04:48 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:51.793 17:04:48 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:51.793 17:04:48 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:51.793 17:04:48 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:51.793 17:04:48 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:51.793 17:04:48 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:51.793 17:04:48 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:51.793 17:04:48 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:51.793 17:04:48 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:51.793 17:04:48 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:51.793 17:04:48 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:51.793 caf0f5d395 version: 22.11.4 00:01:51.793 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:51.793 dc9c799c7d vhost: fix missing spinlock unlock 00:01:51.793 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:51.793 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:51.793 17:04:48 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:51.793 17:04:48 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:51.793 17:04:48 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:51.793 17:04:48 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:51.793 17:04:48 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:51.793 17:04:48 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:51.793 17:04:48 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:51.793 17:04:48 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:51.793 17:04:48 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:51.793 17:04:48 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:51.793 17:04:48 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:51.793 17:04:48 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:51.793 17:04:48 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:51.793 17:04:48 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:51.793 17:04:48 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:51.793 17:04:48 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:51.793 17:04:48 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:51.793 17:04:48 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:51.793 17:04:48 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:51.793 17:04:48 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:51.793 17:04:48 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:51.793 17:04:48 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:51.793 17:04:48 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:51.793 17:04:48 -- scripts/common.sh@343 -- $ case "$op" in 00:01:51.793 17:04:48 -- scripts/common.sh@344 -- $ : 1 00:01:51.793 17:04:48 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:51.793 17:04:48 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:51.793 17:04:48 -- scripts/common.sh@364 -- $ decimal 22 00:01:51.793 17:04:48 -- scripts/common.sh@352 -- $ local d=22 00:01:51.793 17:04:48 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:51.793 17:04:48 -- scripts/common.sh@354 -- $ echo 22 00:01:51.793 17:04:48 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:51.793 17:04:48 -- scripts/common.sh@365 -- $ decimal 21 00:01:51.793 17:04:48 -- scripts/common.sh@352 -- $ local d=21 00:01:51.793 17:04:48 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:51.793 17:04:48 -- scripts/common.sh@354 -- $ echo 21 00:01:51.793 17:04:48 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:51.793 17:04:48 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:51.793 17:04:48 -- scripts/common.sh@366 -- $ return 1 00:01:51.793 17:04:48 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:51.793 patching file config/rte_config.h 00:01:51.793 Hunk #1 succeeded at 60 (offset 1 line). 00:01:51.793 17:04:48 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:51.793 17:04:48 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:51.793 17:04:48 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:51.793 17:04:48 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:51.793 17:04:48 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:51.793 17:04:48 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:51.793 17:04:48 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:51.793 17:04:48 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:51.794 17:04:48 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:51.794 17:04:48 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:51.794 17:04:48 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:51.794 17:04:48 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:51.794 17:04:48 -- scripts/common.sh@343 -- $ case "$op" in 00:01:51.794 17:04:48 -- scripts/common.sh@344 -- $ : 1 00:01:51.794 17:04:48 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:51.794 17:04:48 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:51.794 17:04:48 -- scripts/common.sh@364 -- $ decimal 22 00:01:51.794 17:04:48 -- scripts/common.sh@352 -- $ local d=22 00:01:51.794 17:04:48 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:51.794 17:04:48 -- scripts/common.sh@354 -- $ echo 22 00:01:51.794 17:04:48 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:51.794 17:04:48 -- scripts/common.sh@365 -- $ decimal 24 00:01:51.794 17:04:48 -- scripts/common.sh@352 -- $ local d=24 00:01:51.794 17:04:48 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:51.794 17:04:48 -- scripts/common.sh@354 -- $ echo 24 00:01:51.794 17:04:48 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:51.794 17:04:48 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:51.794 17:04:48 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:51.794 17:04:48 -- scripts/common.sh@367 -- $ return 0 00:01:51.794 17:04:48 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:51.794 patching file lib/pcapng/rte_pcapng.c 00:01:51.794 Hunk #1 succeeded at 110 (offset -18 lines). 
00:01:51.794 17:04:48 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:51.794 17:04:48 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:51.794 17:04:48 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:51.794 17:04:48 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:51.794 17:04:48 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:57.071 The Meson build system 00:01:57.071 Version: 1.5.0 00:01:57.071 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:57.071 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:57.071 Build type: native build 00:01:57.071 Program cat found: YES (/usr/bin/cat) 00:01:57.071 Project name: DPDK 00:01:57.071 Project version: 22.11.4 00:01:57.071 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:57.071 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:57.071 Host machine cpu family: x86_64 00:01:57.071 Host machine cpu: x86_64 00:01:57.071 Message: ## Building in Developer Mode ## 00:01:57.071 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.071 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:57.071 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.071 Program objdump found: YES (/usr/bin/objdump) 00:01:57.071 Program python3 found: YES (/usr/bin/python3) 00:01:57.071 Program cat found: YES (/usr/bin/cat) 00:01:57.071 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:57.071 Checking for size of "void *" : 8 00:01:57.071 Checking for size of "void *" : 8 (cached) 00:01:57.071 Library m found: YES 00:01:57.071 Library numa found: YES 00:01:57.071 Has header "numaif.h" : YES 00:01:57.071 Library fdt found: NO 00:01:57.071 Library execinfo found: NO 00:01:57.071 Has header "execinfo.h" : YES 00:01:57.071 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:57.071 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.071 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.071 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.071 Run-time dependency openssl found: YES 3.1.1 00:01:57.071 Run-time dependency libpcap found: YES 1.10.4 00:01:57.071 Has header "pcap.h" with dependency libpcap: YES 00:01:57.071 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.071 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.071 Compiler for C supports arguments -Wformat: YES 00:01:57.071 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.071 Compiler for C supports arguments -Wformat-security: NO 00:01:57.071 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.071 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.071 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.071 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.071 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.071 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.071 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.071 Compiler for C supports arguments -Wundef: YES 00:01:57.071 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.071 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.071 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.071 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.071 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.071 Compiler for C supports arguments -mavx512f: YES 00:01:57.071 Checking if "AVX512 checking" compiles: YES 00:01:57.071 Fetching value of define "__SSE4_2__" : 1 00:01:57.071 Fetching value of define "__AES__" : 1 00:01:57.071 Fetching value of define "__AVX__" : 1 00:01:57.071 Fetching value of define "__AVX2__" : 1 00:01:57.071 Fetching value of define "__AVX512BW__" : 1 00:01:57.071 Fetching value of define "__AVX512CD__" : 1 00:01:57.071 Fetching value of define "__AVX512DQ__" : 1 00:01:57.071 Fetching value of define "__AVX512F__" : 1 00:01:57.071 Fetching value of define "__AVX512VL__" : 1 00:01:57.071 Fetching value of define "__PCLMUL__" : 1 00:01:57.071 Fetching value of define "__RDRND__" : 1 00:01:57.071 Fetching value of define "__RDSEED__" : 1 00:01:57.071 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.071 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.071 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.071 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.071 Checking for function "getentropy" : YES 00:01:57.071 Message: lib/eal: Defining dependency "eal" 00:01:57.071 Message: lib/ring: Defining dependency "ring" 00:01:57.071 Message: lib/rcu: Defining dependency "rcu" 00:01:57.071 Message: lib/mempool: Defining dependency "mempool" 00:01:57.071 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.071 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.071 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.071 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:57.071 Compiler for C supports arguments -mpclmul: YES 00:01:57.071 Compiler for C supports arguments -maes: YES 00:01:57.071 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.071 Compiler for C supports arguments -mavx512bw: YES 00:01:57.071 Compiler for C supports arguments -mavx512dq: YES 00:01:57.071 Compiler for C supports arguments -mavx512vl: YES 00:01:57.071 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.071 Compiler for C supports arguments -mavx2: YES 00:01:57.071 Compiler for C supports arguments -mavx: YES 00:01:57.071 Message: lib/net: Defining dependency "net" 00:01:57.071 Message: lib/meter: Defining dependency "meter" 00:01:57.071 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.071 Message: lib/pci: Defining dependency "pci" 00:01:57.071 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.071 Message: lib/metrics: Defining dependency "metrics" 00:01:57.071 Message: lib/hash: Defining dependency "hash" 00:01:57.071 Message: lib/timer: Defining dependency "timer" 00:01:57.071 Fetching value of define "__AVX2__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.071 Message: lib/acl: Defining dependency "acl" 00:01:57.071 Message: lib/bbdev: Defining dependency "bbdev" 00:01:57.071 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:57.071 Run-time dependency libelf found: YES 0.191 00:01:57.071 Message: lib/bpf: Defining dependency "bpf" 00:01:57.071 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:57.071 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.071 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.071 Message: lib/distributor: Defining dependency "distributor" 00:01:57.071 Message: lib/efd: Defining dependency "efd" 00:01:57.071 Message: lib/eventdev: Defining dependency "eventdev" 00:01:57.071 Message: lib/gpudev: Defining dependency "gpudev" 00:01:57.071 Message: lib/gro: Defining dependency "gro" 00:01:57.071 Message: lib/gso: Defining dependency "gso" 00:01:57.071 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:57.071 Message: lib/jobstats: Defining dependency "jobstats" 00:01:57.071 Message: lib/latencystats: Defining dependency "latencystats" 00:01:57.071 Message: lib/lpm: Defining dependency "lpm" 00:01:57.071 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.071 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.072 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:57.072 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:57.072 Message: lib/member: Defining dependency "member" 00:01:57.072 Message: lib/pcapng: Defining dependency "pcapng" 00:01:57.072 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.072 Message: lib/power: Defining dependency "power" 00:01:57.072 Message: lib/rawdev: Defining dependency "rawdev" 00:01:57.072 Message: lib/regexdev: Defining dependency "regexdev" 00:01:57.072 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:57.072 Message: lib/rib: Defining dependency "rib" 00:01:57.072 Message: lib/reorder: Defining dependency "reorder" 00:01:57.072 Message: lib/sched: Defining dependency "sched" 00:01:57.072 Message: lib/security: Defining dependency "security" 00:01:57.072 Message: lib/stack: Defining dependency "stack" 00:01:57.072 Has header "linux/userfaultfd.h" : YES 00:01:57.072 Message: lib/vhost: Defining dependency "vhost" 00:01:57.072 Message: lib/ipsec: Defining dependency "ipsec" 00:01:57.072 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.072 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.072 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.072 Message: lib/fib: Defining dependency "fib" 00:01:57.072 Message: lib/port: Defining dependency "port" 00:01:57.072 Message: lib/pdump: Defining dependency "pdump" 00:01:57.072 Message: lib/table: Defining dependency "table" 00:01:57.072 Message: lib/pipeline: Defining dependency "pipeline" 00:01:57.072 Message: lib/graph: Defining dependency "graph" 00:01:57.072 Message: lib/node: Defining dependency "node" 00:01:57.072 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.072 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.072 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.072 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.072 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:57.072 Compiler for C supports arguments -Wno-unused-value: YES 00:01:57.072 Compiler for C supports arguments -Wno-format: YES 00:01:57.072 Compiler for C supports arguments -Wno-format-security: YES 00:01:57.072 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:57.647 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:57.647 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:57.647 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:57.647 Fetching value of define "__AVX2__" : 1 (cached) 00:01:57.647 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.647 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.647 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.647 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:57.647 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:57.647 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:57.647 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.647 Configuring doxy-api.conf using configuration 00:01:57.647 Program sphinx-build found: NO 00:01:57.647 Configuring rte_build_config.h using configuration 00:01:57.647 Message: 00:01:57.647 ================= 00:01:57.647 Applications Enabled 00:01:57.647 ================= 00:01:57.647 00:01:57.647 apps: 00:01:57.647 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:57.647 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:57.647 test-security-perf, 00:01:57.647 00:01:57.647 Message: 00:01:57.647 ================= 00:01:57.647 Libraries Enabled 00:01:57.647 ================= 00:01:57.647 00:01:57.647 libs: 00:01:57.647 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:57.647 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:57.647 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:57.647 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:57.647 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:57.647 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:57.647 table, pipeline, graph, node, 00:01:57.647 00:01:57.647 Message: 00:01:57.647 =============== 00:01:57.647 Drivers Enabled 00:01:57.647 =============== 00:01:57.647 00:01:57.647 common: 00:01:57.647 00:01:57.647 bus: 00:01:57.647 pci, vdev, 00:01:57.647 mempool: 00:01:57.647 ring, 00:01:57.647 dma: 00:01:57.647 00:01:57.647 net: 00:01:57.647 i40e, 00:01:57.647 raw: 00:01:57.647 00:01:57.647 crypto: 00:01:57.647 00:01:57.647 compress: 00:01:57.647 00:01:57.647 regex: 00:01:57.647 00:01:57.647 vdpa: 00:01:57.647 00:01:57.647 event: 00:01:57.647 00:01:57.647 baseband: 00:01:57.647 00:01:57.647 gpu: 00:01:57.647 00:01:57.647 00:01:57.647 Message: 00:01:57.647 ================= 00:01:57.647 Content Skipped 00:01:57.647 ================= 00:01:57.647 00:01:57.647 apps: 00:01:57.647 00:01:57.647 libs: 00:01:57.647 kni: explicitly disabled via build config (deprecated lib) 00:01:57.647 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:57.647 00:01:57.647 drivers: 00:01:57.647 common/cpt: not in enabled drivers build config 00:01:57.647 common/dpaax: not in enabled drivers build config 00:01:57.647 common/iavf: not in enabled drivers build config 00:01:57.647 common/idpf: not in enabled drivers build config 00:01:57.647 common/mvep: not in enabled drivers build config 00:01:57.647 common/octeontx: not in enabled drivers build config 00:01:57.647 bus/auxiliary: not in enabled drivers build config 00:01:57.647 bus/dpaa: not in enabled drivers build config 00:01:57.647 bus/fslmc: not in enabled drivers build config 00:01:57.647 bus/ifpga: not in enabled drivers build config 00:01:57.647 bus/vmbus: not in enabled drivers build config 00:01:57.647 common/cnxk: not in enabled drivers build config 00:01:57.647 common/mlx5: not in enabled drivers build config 00:01:57.647 common/qat: not in enabled drivers build config 00:01:57.647 common/sfc_efx: not in enabled drivers build config 00:01:57.647 mempool/bucket: not in enabled drivers build config 00:01:57.647 mempool/cnxk: not in enabled drivers build config 00:01:57.647 mempool/dpaa: not in enabled drivers build config 00:01:57.647 mempool/dpaa2: not in enabled drivers build config 00:01:57.647 mempool/octeontx: not in enabled drivers build config 00:01:57.647 mempool/stack: not in enabled drivers build config 00:01:57.647 dma/cnxk: not in enabled drivers build config 00:01:57.647 dma/dpaa: not in enabled drivers build config 00:01:57.647 dma/dpaa2: not in enabled drivers build config 00:01:57.647 dma/hisilicon: not in enabled drivers build config 00:01:57.647 dma/idxd: not in enabled drivers build config 00:01:57.647 dma/ioat: not in enabled drivers build config 00:01:57.647 dma/skeleton: not in enabled drivers build config 00:01:57.647 net/af_packet: not in enabled drivers build config 00:01:57.647 net/af_xdp: not in enabled drivers build config 00:01:57.647 net/ark: not in enabled drivers build config 00:01:57.647 net/atlantic: not in enabled drivers build config 00:01:57.647 net/avp: not in enabled drivers build config 00:01:57.647 net/axgbe: not in enabled drivers build config 00:01:57.647 net/bnx2x: not in enabled drivers build config 00:01:57.647 net/bnxt: not in enabled drivers build config 00:01:57.647 net/bonding: not in enabled drivers build config 00:01:57.647 net/cnxk: not in enabled drivers build config 
00:01:57.647 net/cxgbe: not in enabled drivers build config 00:01:57.647 net/dpaa: not in enabled drivers build config 00:01:57.647 net/dpaa2: not in enabled drivers build config 00:01:57.647 net/e1000: not in enabled drivers build config 00:01:57.647 net/ena: not in enabled drivers build config 00:01:57.647 net/enetc: not in enabled drivers build config 00:01:57.647 net/enetfec: not in enabled drivers build config 00:01:57.647 net/enic: not in enabled drivers build config 00:01:57.647 net/failsafe: not in enabled drivers build config 00:01:57.647 net/fm10k: not in enabled drivers build config 00:01:57.647 net/gve: not in enabled drivers build config 00:01:57.647 net/hinic: not in enabled drivers build config 00:01:57.647 net/hns3: not in enabled drivers build config 00:01:57.647 net/iavf: not in enabled drivers build config 00:01:57.647 net/ice: not in enabled drivers build config 00:01:57.647 net/idpf: not in enabled drivers build config 00:01:57.647 net/igc: not in enabled drivers build config 00:01:57.647 net/ionic: not in enabled drivers build config 00:01:57.647 net/ipn3ke: not in enabled drivers build config 00:01:57.647 net/ixgbe: not in enabled drivers build config 00:01:57.647 net/kni: not in enabled drivers build config 00:01:57.647 net/liquidio: not in enabled drivers build config 00:01:57.647 net/mana: not in enabled drivers build config 00:01:57.647 net/memif: not in enabled drivers build config 00:01:57.647 net/mlx4: not in enabled drivers build config 00:01:57.647 net/mlx5: not in enabled drivers build config 00:01:57.647 net/mvneta: not in enabled drivers build config 00:01:57.647 net/mvpp2: not in enabled drivers build config 00:01:57.647 net/netvsc: not in enabled drivers build config 00:01:57.647 net/nfb: not in enabled drivers build config 00:01:57.647 net/nfp: not in enabled drivers build config 00:01:57.647 net/ngbe: not in enabled drivers build config 00:01:57.647 net/null: not in enabled drivers build config 00:01:57.647 net/octeontx: not in enabled drivers build config 00:01:57.647 net/octeon_ep: not in enabled drivers build config 00:01:57.647 net/pcap: not in enabled drivers build config 00:01:57.647 net/pfe: not in enabled drivers build config 00:01:57.647 net/qede: not in enabled drivers build config 00:01:57.647 net/ring: not in enabled drivers build config 00:01:57.647 net/sfc: not in enabled drivers build config 00:01:57.647 net/softnic: not in enabled drivers build config 00:01:57.647 net/tap: not in enabled drivers build config 00:01:57.647 net/thunderx: not in enabled drivers build config 00:01:57.647 net/txgbe: not in enabled drivers build config 00:01:57.647 net/vdev_netvsc: not in enabled drivers build config 00:01:57.647 net/vhost: not in enabled drivers build config 00:01:57.648 net/virtio: not in enabled drivers build config 00:01:57.648 net/vmxnet3: not in enabled drivers build config 00:01:57.648 raw/cnxk_bphy: not in enabled drivers build config 00:01:57.648 raw/cnxk_gpio: not in enabled drivers build config 00:01:57.648 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:57.648 raw/ifpga: not in enabled drivers build config 00:01:57.648 raw/ntb: not in enabled drivers build config 00:01:57.648 raw/skeleton: not in enabled drivers build config 00:01:57.648 crypto/armv8: not in enabled drivers build config 00:01:57.648 crypto/bcmfs: not in enabled drivers build config 00:01:57.648 crypto/caam_jr: not in enabled drivers build config 00:01:57.648 crypto/ccp: not in enabled drivers build config 00:01:57.648 crypto/cnxk: not in enabled drivers 
build config 00:01:57.648 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.648 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.648 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.648 crypto/mlx5: not in enabled drivers build config 00:01:57.648 crypto/mvsam: not in enabled drivers build config 00:01:57.648 crypto/nitrox: not in enabled drivers build config 00:01:57.648 crypto/null: not in enabled drivers build config 00:01:57.648 crypto/octeontx: not in enabled drivers build config 00:01:57.648 crypto/openssl: not in enabled drivers build config 00:01:57.648 crypto/scheduler: not in enabled drivers build config 00:01:57.648 crypto/uadk: not in enabled drivers build config 00:01:57.648 crypto/virtio: not in enabled drivers build config 00:01:57.648 compress/isal: not in enabled drivers build config 00:01:57.648 compress/mlx5: not in enabled drivers build config 00:01:57.648 compress/octeontx: not in enabled drivers build config 00:01:57.648 compress/zlib: not in enabled drivers build config 00:01:57.648 regex/mlx5: not in enabled drivers build config 00:01:57.648 regex/cn9k: not in enabled drivers build config 00:01:57.648 vdpa/ifc: not in enabled drivers build config 00:01:57.648 vdpa/mlx5: not in enabled drivers build config 00:01:57.648 vdpa/sfc: not in enabled drivers build config 00:01:57.648 event/cnxk: not in enabled drivers build config 00:01:57.648 event/dlb2: not in enabled drivers build config 00:01:57.648 event/dpaa: not in enabled drivers build config 00:01:57.648 event/dpaa2: not in enabled drivers build config 00:01:57.648 event/dsw: not in enabled drivers build config 00:01:57.648 event/opdl: not in enabled drivers build config 00:01:57.648 event/skeleton: not in enabled drivers build config 00:01:57.648 event/sw: not in enabled drivers build config 00:01:57.648 event/octeontx: not in enabled drivers build config 00:01:57.648 baseband/acc: not in enabled drivers build config 00:01:57.648 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:57.648 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:57.648 baseband/la12xx: not in enabled drivers build config 00:01:57.648 baseband/null: not in enabled drivers build config 00:01:57.648 baseband/turbo_sw: not in enabled drivers build config 00:01:57.648 gpu/cuda: not in enabled drivers build config 00:01:57.648 00:01:57.648 00:01:57.648 Build targets in project: 311 00:01:57.648 00:01:57.648 DPDK 22.11.4 00:01:57.648 00:01:57.648 User defined options 00:01:57.648 libdir : lib 00:01:57.648 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:57.648 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:57.648 c_link_args : 00:01:57.648 enable_docs : false 00:01:57.648 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:57.648 enable_kmods : false 00:01:57.648 machine : native 00:01:57.648 tests : false 00:01:57.648 00:01:57.648 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.648 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:57.648 17:04:54 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:01:57.648 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:57.648 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:57.648 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:57.648 [3/740] Generating lib/rte_telemetry_def with a custom command 00:01:57.648 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:57.648 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.648 [6/740] Generating lib/rte_eal_def with a custom command 00:01:57.648 [7/740] Generating lib/rte_eal_mingw with a custom command 00:01:57.648 [8/740] Generating lib/rte_ring_mingw with a custom command 00:01:57.648 [9/740] Generating lib/rte_rcu_def with a custom command 00:01:57.648 [10/740] Generating lib/rte_rcu_mingw with a custom command 00:01:57.648 [11/740] Generating lib/rte_mempool_def with a custom command 00:01:57.909 [12/740] Generating lib/rte_ring_def with a custom command 00:01:57.909 [13/740] Generating lib/rte_mbuf_def with a custom command 00:01:57.909 [14/740] Generating lib/rte_net_def with a custom command 00:01:57.909 [15/740] Generating lib/rte_meter_def with a custom command 00:01:57.909 [16/740] Generating lib/rte_meter_mingw with a custom command 00:01:57.909 [17/740] Generating lib/rte_mempool_mingw with a custom command 00:01:57.909 [18/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:57.909 [19/740] Generating lib/rte_net_mingw with a custom command 00:01:57.909 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.909 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.909 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.909 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.909 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.909 [25/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.909 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.909 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.909 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:57.909 [29/740] Generating lib/rte_pci_def with a custom command 00:01:57.909 [30/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.909 [31/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.909 [32/740] Generating lib/rte_ethdev_def with a custom command 00:01:57.909 [33/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:57.909 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.909 [35/740] Generating lib/rte_pci_mingw with a custom command 00:01:57.909 [36/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.909 [37/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.909 [38/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.909 [39/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.909 [40/740] Linking static target lib/librte_kvargs.a 00:01:57.909 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.909 [42/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.909 [43/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.909 [44/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.909 [45/740] Generating lib/rte_cmdline_def with a custom command 00:01:57.909 [46/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:57.909 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.909 [48/740] Generating lib/rte_metrics_def with a custom command 00:01:57.909 [49/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.909 [50/740] Generating lib/rte_metrics_mingw with a custom command 00:01:57.909 [51/740] Generating lib/rte_hash_def with a custom command 00:01:57.909 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.909 [53/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.909 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.909 [55/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.909 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.909 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.909 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.909 [59/740] Generating lib/rte_timer_mingw with a custom command 00:01:57.909 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.909 [61/740] Generating lib/rte_hash_mingw with a custom command 00:01:57.909 [62/740] Generating lib/rte_timer_def with a custom command 00:01:57.909 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.909 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.909 [65/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.909 [66/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.909 [67/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:57.909 [68/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.909 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.909 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.909 [71/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.909 [72/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.909 [73/740] Generating lib/rte_acl_mingw with a custom command 00:01:57.909 [74/740] Generating lib/rte_acl_def with a custom command 00:01:57.909 [75/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:57.909 [76/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.909 [77/740] Generating lib/rte_bbdev_def with a custom command 00:01:57.909 [78/740] Generating lib/rte_bitratestats_def with a custom command 00:01:57.909 [79/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:57.909 [80/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.909 [81/740] Linking static target lib/librte_pci.a 00:01:57.909 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.909 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.909 [84/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.172 [85/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.172 [86/740] Generating lib/rte_bpf_def with a custom command 00:01:58.172 [87/740] Generating lib/rte_bpf_mingw with a custom command 00:01:58.172 [88/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.172 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.172 [90/740] Generating lib/rte_cfgfile_def with a custom command 00:01:58.172 [91/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.172 [92/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:58.172 [93/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.172 [94/740] Generating lib/rte_compressdev_def with a custom command 00:01:58.172 [95/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.172 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.172 [97/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:58.172 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.172 [99/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.172 [100/740] Linking static target lib/librte_meter.a 00:01:58.172 [101/740] Linking static target lib/librte_ring.a 00:01:58.172 [102/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.172 [103/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.172 [104/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:58.172 [105/740] Generating lib/rte_cryptodev_def with a custom command 00:01:58.172 [106/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.172 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.172 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.172 [109/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.172 [110/740] Generating lib/rte_distributor_mingw with a custom command 00:01:58.172 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:58.172 [112/740] Generating lib/rte_distributor_def with a custom command 00:01:58.172 [113/740] Generating lib/rte_efd_def with a custom command 00:01:58.172 [114/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.172 [115/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.172 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.172 [117/740] Generating lib/rte_efd_mingw with a custom command 00:01:58.172 [118/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.172 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.172 [120/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.172 [121/740] Generating lib/rte_gpudev_def with a custom command 00:01:58.172 [122/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:58.172 [123/740] Generating lib/rte_eventdev_def with a custom command 00:01:58.172 [124/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:58.172 [125/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:58.172 [126/740] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.172 [127/740] Generating lib/rte_gro_mingw with a custom command 00:01:58.172 [128/740] Generating lib/rte_gro_def with a custom command 00:01:58.172 [129/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.172 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.172 [131/740] Generating lib/rte_gso_def with a custom command 00:01:58.172 [132/740] Generating lib/rte_gso_mingw with a custom command 00:01:58.172 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.437 [134/740] Generating lib/rte_ip_frag_def with a custom command 00:01:58.437 [135/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.437 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.437 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.437 [138/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:58.437 [139/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.437 [140/740] Linking target lib/librte_kvargs.so.23.0 00:01:58.437 [141/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.437 [142/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:58.437 [143/740] Generating lib/rte_jobstats_def with a custom command 00:01:58.437 [144/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.437 [145/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:58.437 [146/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.437 [147/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.437 [148/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.437 [149/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.437 [150/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.437 [151/740] Linking static target lib/librte_cfgfile.a 00:01:58.437 [152/740] Generating lib/rte_latencystats_def with a custom command 00:01:58.437 [153/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:58.437 [154/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.437 [155/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.437 [156/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.437 [157/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.437 [158/740] Generating lib/rte_lpm_def with a custom command 00:01:58.437 [159/740] Generating lib/rte_lpm_mingw with a custom command 00:01:58.437 [160/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.437 [161/740] Generating lib/rte_member_mingw with a custom command 00:01:58.437 [162/740] Generating lib/rte_member_def with a custom command 00:01:58.437 [163/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.437 [164/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:58.437 [165/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:58.437 [166/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.437 [167/740] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.437 [168/740] Generating lib/rte_pcapng_def with a custom command 00:01:58.437 [169/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:58.437 [170/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.437 [171/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.437 [172/740] Linking static target lib/librte_jobstats.a 00:01:58.437 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.437 [174/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.699 [175/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.699 [176/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.699 [177/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.699 [178/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.699 [179/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:58.699 [180/740] Generating lib/rte_power_def with a custom command 00:01:58.699 [181/740] Linking static target lib/librte_cmdline.a 00:01:58.699 [182/740] Generating lib/rte_power_mingw with a custom command 00:01:58.699 [183/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.699 [184/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.699 [185/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.699 [186/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:58.699 [187/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.699 [188/740] Generating lib/rte_rawdev_def with a custom command 00:01:58.699 [189/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.699 [190/740] Linking static target lib/librte_timer.a 00:01:58.699 [191/740] Linking static target lib/librte_telemetry.a 00:01:58.699 [192/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.699 [193/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:58.699 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:58.699 [195/740] Linking static target lib/librte_metrics.a 00:01:58.699 [196/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:58.699 [197/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:58.699 [198/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.699 [199/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.699 [200/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:58.699 [201/740] Generating lib/rte_regexdev_def with a custom command 00:01:58.699 [202/740] Generating lib/rte_dmadev_def with a custom command 00:01:58.699 [203/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.699 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:58.699 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:58.699 [206/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:58.699 [207/740] Generating lib/rte_rib_def with a custom command 00:01:58.699 [208/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.699 [209/740] Generating lib/rte_rib_mingw with a custom command 00:01:58.699 [210/740] Generating lib/rte_reorder_def with a custom command 
00:01:58.699 [211/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.699 [212/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.699 [213/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:58.699 [214/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.699 [215/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.699 [216/740] Generating lib/rte_reorder_mingw with a custom command 00:01:58.699 [217/740] Generating lib/rte_sched_def with a custom command 00:01:58.699 [218/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:58.699 [219/740] Linking static target lib/librte_net.a 00:01:58.699 [220/740] Generating lib/rte_sched_mingw with a custom command 00:01:58.699 [221/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.699 [222/740] Linking static target lib/librte_bitratestats.a 00:01:58.699 [223/740] Generating lib/rte_security_def with a custom command 00:01:58.699 [224/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.699 [225/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:58.699 [226/740] Generating lib/rte_security_mingw with a custom command 00:01:58.699 [227/740] Generating lib/rte_stack_mingw with a custom command 00:01:58.699 [228/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.699 [229/740] Generating lib/rte_stack_def with a custom command 00:01:58.699 [230/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:58.699 [231/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.700 [232/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.700 [233/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.700 [234/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:58.700 [235/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.700 [236/740] Generating lib/rte_vhost_def with a custom command 00:01:58.700 [237/740] Generating lib/rte_vhost_mingw with a custom command 00:01:58.700 [238/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.700 [239/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:58.700 [240/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:58.700 [241/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:58.959 [242/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:58.959 [243/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:58.959 [244/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:58.959 [245/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.959 [246/740] Generating lib/rte_ipsec_def with a custom command 00:01:58.959 [247/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.959 [248/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:58.959 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:58.959 [250/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:58.959 [251/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:58.959 [252/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:58.959 [253/740] Generating lib/rte_fib_def with 
a custom command 00:01:58.959 [254/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:58.959 [255/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:58.959 [256/740] Linking static target lib/librte_stack.a 00:01:58.959 [257/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.959 [258/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:58.959 [259/740] Generating lib/rte_fib_mingw with a custom command 00:01:58.959 [260/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:58.959 [261/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:58.959 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:58.959 [263/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:58.959 [264/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.959 [265/740] Generating lib/rte_port_def with a custom command 00:01:58.959 [266/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.959 [267/740] Generating lib/rte_port_mingw with a custom command 00:01:58.959 [268/740] Linking static target lib/librte_compressdev.a 00:01:58.959 [269/740] Generating lib/rte_pdump_mingw with a custom command 00:01:58.959 [270/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:58.959 [271/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.959 [272/740] Generating lib/rte_pdump_def with a custom command 00:01:58.959 [273/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.959 [274/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.959 [275/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:58.959 [276/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:58.959 [277/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:58.959 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:58.959 [279/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.959 [280/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:58.959 [281/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.959 [282/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.959 [283/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:58.959 [284/740] Linking static target lib/librte_rcu.a 00:01:58.959 [285/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:58.959 [286/740] Linking static target lib/librte_mempool.a 00:01:58.959 [287/740] Linking static target lib/librte_rawdev.a 00:01:59.222 [288/740] Generating lib/rte_table_def with a custom command 00:01:59.222 [289/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.222 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:59.222 [291/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.222 [292/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:59.222 [293/740] Generating lib/rte_table_mingw with a custom command 00:01:59.222 [294/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:59.222 [295/740] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:59.222 [296/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:59.222 [297/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.222 [298/740] Linking static target lib/librte_bbdev.a 00:01:59.222 [299/740] Linking static target lib/librte_gro.a 00:01:59.222 [300/740] Linking static target lib/librte_gpudev.a 00:01:59.222 [301/740] Linking static target lib/librte_dmadev.a 00:01:59.222 [302/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:59.222 [303/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:59.222 [304/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:59.222 [305/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:59.222 [306/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:59.222 [307/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.222 [308/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.222 [309/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.222 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:59.222 [311/740] Generating lib/rte_pipeline_def with a custom command 00:01:59.222 [312/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:59.222 [313/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.222 [314/740] Linking static target lib/librte_gso.a 00:01:59.222 [315/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:59.222 [316/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:59.222 [317/740] Linking static target lib/librte_latencystats.a 00:01:59.222 [318/740] Linking target lib/librte_telemetry.so.23.0 00:01:59.222 [319/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:59.222 [320/740] Generating lib/rte_graph_def with a custom command 00:01:59.222 [321/740] Generating lib/rte_graph_mingw with a custom command 00:01:59.222 [322/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:59.222 [323/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:59.222 [324/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:59.222 [325/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:59.222 [326/740] Linking static target lib/librte_distributor.a 00:01:59.487 [327/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:59.487 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:59.487 [329/740] Linking static target lib/librte_ip_frag.a 00:01:59.487 [330/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:59.487 [331/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:59.487 [332/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:59.487 [333/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:59.487 [334/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:59.487 [335/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:59.487 [336/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:59.487 [337/740] 
Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:59.487 [338/740] Linking static target lib/librte_regexdev.a 00:01:59.487 [339/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:59.487 [340/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:59.487 [341/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:59.487 [342/740] Generating lib/rte_node_mingw with a custom command 00:01:59.487 [343/740] Generating lib/rte_node_def with a custom command 00:01:59.487 [344/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:59.487 [345/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.487 [346/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.487 [347/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.487 [348/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.487 [349/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.487 [350/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:59.487 [351/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.487 [352/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:59.487 [353/740] Linking static target lib/librte_reorder.a 00:01:59.487 [354/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:59.487 [355/740] Linking static target lib/librte_eal.a 00:01:59.487 [356/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:59.487 [357/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:59.487 [358/740] Linking static target lib/librte_power.a 00:01:59.487 [359/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.487 [360/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.487 [361/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.487 [362/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:59.487 [363/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:59.487 [364/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:59.487 [365/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:59.487 [366/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.487 [367/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:59.487 [368/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.487 [369/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.746 [370/740] Linking static target lib/librte_security.a 00:01:59.746 [371/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:59.746 [372/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:59.746 [373/740] Linking static target lib/librte_pcapng.a 00:01:59.746 [374/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:59.746 [375/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:59.746 [376/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.746 [377/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.746 
[378/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:59.746 [379/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:59.746 [380/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.746 [381/740] Linking static target lib/librte_mbuf.a 00:01:59.746 [382/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:59.746 [383/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.746 [384/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:59.746 [385/740] Linking static target lib/librte_bpf.a 00:01:59.746 [386/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:59.746 [387/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:59.746 [388/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:59.746 [389/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:59.746 [390/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:59.746 [391/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.746 [392/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:59.746 [393/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:59.746 [394/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.746 [395/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.746 [396/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:00.007 [397/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:00.007 [398/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:00.007 [399/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.007 [400/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:00.007 [401/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:00.007 [402/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:00.007 [403/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.007 [404/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:00.007 [405/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:00.008 [406/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:00.008 [407/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:00.008 [408/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:00.008 [409/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:00.008 [410/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:00.008 [411/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:00.008 [412/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.008 [413/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:00.008 [414/740] Linking static target lib/librte_rib.a 00:02:00.008 [415/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.008 [416/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:00.008 [417/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:00.008 [418/740] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:00.008 [419/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:00.008 [420/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.008 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:00.008 [422/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:00.008 [423/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.008 [424/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:00.008 [425/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:00.008 [426/740] Linking static target lib/librte_lpm.a 00:02:00.008 [427/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:00.008 [428/740] Linking static target lib/librte_graph.a 00:02:00.008 [429/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.008 [430/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.008 [431/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:00.008 [432/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:00.008 [433/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:00.008 [434/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:00.272 [435/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:00.272 [436/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:00.272 [437/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:00.272 [438/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:00.272 [439/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:00.272 [440/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.272 [441/740] Linking static target lib/librte_efd.a 00:02:00.272 [442/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:00.272 [443/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:00.272 [444/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:00.272 [445/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.272 [446/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.272 [447/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:00.272 [448/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.272 [449/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.272 [450/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.272 [451/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:00.272 [452/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.272 [453/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.272 [454/740] Linking static target drivers/librte_bus_vdev.a 00:02:00.272 [455/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.272 [456/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.272 [457/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:00.272 [458/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:00.272 [459/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:00.535 [460/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:00.535 [461/740] Linking static target lib/librte_fib.a 00:02:00.535 [462/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:00.535 [463/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:00.535 [464/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:00.535 [465/740] Linking static target lib/librte_pdump.a 00:02:00.535 [466/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.535 [467/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.535 [468/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:00.535 [469/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:00.535 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.535 [471/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.535 [472/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.797 [473/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.797 [474/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.797 [475/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:00.797 [476/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:00.797 [477/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:00.797 [478/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:00.797 [479/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:00.797 [480/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:00.797 [481/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.797 [482/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.797 [483/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.797 [484/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:00.797 [485/740] Linking static target drivers/librte_bus_pci.a 00:02:00.797 [486/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.797 [487/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:00.797 [488/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:00.797 [489/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:00.797 [490/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:00.797 [491/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:00.797 [492/740] Linking static target lib/librte_table.a 00:02:00.797 [493/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:00.797 [494/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 
00:02:01.058 [495/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:01.058 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:01.058 [497/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:01.058 [498/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:01.058 [499/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:01.058 [500/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:01.058 [501/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:01.058 [502/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:01.058 [503/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:01.058 [504/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:01.058 [505/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:01.058 [506/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.058 [507/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.058 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:01.058 [509/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:01.058 [510/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:01.058 [511/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:01.058 [512/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:01.058 [513/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:01.058 [514/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:01.058 [515/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:01.058 [516/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:01.058 [517/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.058 [518/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.058 [519/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.058 [520/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:01.058 [521/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:01.058 [522/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:01.058 [523/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:01.058 [524/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:01.317 [525/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:01.317 [526/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.317 [527/740] Linking static target lib/librte_cryptodev.a 00:02:01.317 [528/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:01.317 [529/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:01.317 [530/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:01.317 [531/740] Compiling C object 
lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:01.317 [532/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:01.317 [533/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:01.317 [534/740] Linking static target lib/librte_sched.a 00:02:01.317 [535/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.317 [536/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:01.317 [537/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:01.317 [538/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:01.317 [539/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:01.317 [540/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:01.317 [541/740] Linking static target lib/librte_node.a 00:02:01.317 [542/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.317 [543/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.317 [544/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.317 [545/740] Linking static target drivers/librte_mempool_ring.a 00:02:01.317 [546/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.317 [547/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:01.317 [548/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:01.317 [549/740] Linking static target lib/librte_ipsec.a 00:02:01.317 [550/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:01.317 [551/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:01.317 [552/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:01.317 [553/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:01.317 [554/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:01.576 [555/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:01.576 [556/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:01.576 [557/740] Linking static target lib/librte_member.a 00:02:01.576 [558/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:01.576 [559/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:01.576 [560/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:01.576 [561/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:01.576 [562/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:01.576 [563/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:01.576 [564/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:01.576 [565/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.576 [566/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:01.576 [567/740] Linking static target lib/librte_ethdev.a 00:02:01.576 [568/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:01.576 [569/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:01.576 [570/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:01.576 [571/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.576 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:01.576 [573/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:01.576 [574/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:01.576 [575/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:01.576 [576/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:01.576 [577/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:01.576 [578/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:01.576 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:01.576 [580/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:01.834 [581/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:01.835 [582/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:01.835 [583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:01.835 [584/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:01.835 [585/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:01.835 [586/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:01.835 [587/740] Linking static target lib/librte_port.a 00:02:01.835 [588/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:01.835 [589/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.835 [590/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.835 [591/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:01.835 [592/740] Linking static target lib/librte_eventdev.a 00:02:01.835 [593/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:01.835 [594/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:01.835 [595/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.835 [596/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:01.835 [597/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.093 [598/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.093 [599/740] Linking static target lib/librte_hash.a 00:02:02.093 [600/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:02.093 [601/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:02.093 [602/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:02.093 [603/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:02.093 [604/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:02.093 [605/740] Linking static target lib/librte_acl.a 00:02:02.093 [606/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:02.093 [607/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:02.093 [608/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:02.093 [609/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:02.353 [610/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:02.353 [611/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:02.353 [612/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:02.611 [613/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.611 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:02.611 [615/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.611 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:02.869 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:03.128 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.128 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:03.386 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:03.645 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:03.904 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:03.904 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:04.163 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:04.422 [625/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:04.422 [626/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:04.422 [627/740] Linking static target drivers/librte_net_i40e.a 00:02:04.987 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:04.987 [629/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:04.987 [630/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.987 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:05.245 [632/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.504 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.779 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.038 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.038 [636/740] Linking static target lib/librte_vhost.a 00:02:11.976 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:11.976 [638/740] Linking static target lib/librte_pipeline.a 00:02:12.236 [639/740] Linking target app/dpdk-dumpcap 00:02:12.236 [640/740] Linking target app/dpdk-test-acl 00:02:12.236 [641/740] Linking target app/dpdk-proc-info 00:02:12.236 [642/740] Linking target app/dpdk-test-cmdline 00:02:12.236 [643/740] Linking target app/dpdk-pdump 00:02:12.236 [644/740] Linking target app/dpdk-test-compress-perf 00:02:12.236 [645/740] Linking target app/dpdk-test-crypto-perf 00:02:12.236 [646/740] Linking target app/dpdk-test-eventdev 00:02:12.236 [647/740] Linking target app/dpdk-test-sad 00:02:12.236 [648/740] Linking target app/dpdk-test-fib 00:02:12.236 [649/740] Linking target app/dpdk-test-gpudev 00:02:12.236 [650/740] Linking 
target app/dpdk-test-security-perf 00:02:12.236 [651/740] Linking target app/dpdk-test-flow-perf 00:02:12.236 [652/740] Linking target app/dpdk-test-regex 00:02:12.236 [653/740] Linking target app/dpdk-test-pipeline 00:02:12.236 [654/740] Linking target app/dpdk-test-bbdev 00:02:12.236 [655/740] Linking target app/dpdk-testpmd 00:02:13.174 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.744 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.003 [658/740] Linking target lib/librte_eal.so.23.0 00:02:14.003 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:14.263 [660/740] Linking target lib/librte_rawdev.so.23.0 00:02:14.263 [661/740] Linking target lib/librte_jobstats.so.23.0 00:02:14.263 [662/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:14.263 [663/740] Linking target lib/librte_ring.so.23.0 00:02:14.263 [664/740] Linking target lib/librte_pci.so.23.0 00:02:14.263 [665/740] Linking target lib/librte_timer.so.23.0 00:02:14.263 [666/740] Linking target lib/librte_meter.so.23.0 00:02:14.263 [667/740] Linking target lib/librte_cfgfile.so.23.0 00:02:14.263 [668/740] Linking target lib/librte_dmadev.so.23.0 00:02:14.263 [669/740] Linking target lib/librte_stack.so.23.0 00:02:14.263 [670/740] Linking target lib/librte_graph.so.23.0 00:02:14.263 [671/740] Linking target lib/librte_acl.so.23.0 00:02:14.263 [672/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:14.263 [673/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:14.263 [674/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:14.263 [675/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:14.263 [676/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:14.263 [677/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:14.263 [678/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:14.263 [679/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:14.263 [680/740] Linking target lib/librte_rcu.so.23.0 00:02:14.263 [681/740] Linking target lib/librte_mempool.so.23.0 00:02:14.263 [682/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:14.521 [683/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:14.521 [684/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:14.521 [685/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:14.521 [686/740] Linking target lib/librte_rib.so.23.0 00:02:14.521 [687/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:14.521 [688/740] Linking target lib/librte_mbuf.so.23.0 00:02:14.521 [689/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:14.780 [690/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:14.780 [691/740] Linking target lib/librte_fib.so.23.0 00:02:14.780 [692/740] Linking target lib/librte_bbdev.so.23.0 00:02:14.780 [693/740] Linking target lib/librte_regexdev.so.23.0 00:02:14.780 [694/740] Linking target lib/librte_net.so.23.0 00:02:14.780 [695/740] Linking target lib/librte_compressdev.so.23.0 00:02:14.780 [696/740] Linking target 
lib/librte_distributor.so.23.0 00:02:14.780 [697/740] Linking target lib/librte_gpudev.so.23.0 00:02:14.780 [698/740] Linking target lib/librte_reorder.so.23.0 00:02:14.780 [699/740] Linking target lib/librte_cryptodev.so.23.0 00:02:14.780 [700/740] Linking target lib/librte_sched.so.23.0 00:02:14.780 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:14.780 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:14.780 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:14.780 [704/740] Linking target lib/librte_hash.so.23.0 00:02:15.039 [705/740] Linking target lib/librte_cmdline.so.23.0 00:02:15.039 [706/740] Linking target lib/librte_ethdev.so.23.0 00:02:15.039 [707/740] Linking target lib/librte_security.so.23.0 00:02:15.039 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:15.039 [709/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:15.039 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:15.039 [711/740] Linking target lib/librte_efd.so.23.0 00:02:15.039 [712/740] Linking target lib/librte_lpm.so.23.0 00:02:15.039 [713/740] Linking target lib/librte_member.so.23.0 00:02:15.039 [714/740] Linking target lib/librte_gso.so.23.0 00:02:15.039 [715/740] Linking target lib/librte_ipsec.so.23.0 00:02:15.039 [716/740] Linking target lib/librte_ip_frag.so.23.0 00:02:15.039 [717/740] Linking target lib/librte_metrics.so.23.0 00:02:15.039 [718/740] Linking target lib/librte_gro.so.23.0 00:02:15.039 [719/740] Linking target lib/librte_pcapng.so.23.0 00:02:15.039 [720/740] Linking target lib/librte_bpf.so.23.0 00:02:15.039 [721/740] Linking target lib/librte_power.so.23.0 00:02:15.039 [722/740] Linking target lib/librte_eventdev.so.23.0 00:02:15.039 [723/740] Linking target lib/librte_vhost.so.23.0 00:02:15.039 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:15.299 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:15.299 [726/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:15.299 [727/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:15.299 [728/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:15.299 [729/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:15.299 [730/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:15.299 [731/740] Linking target lib/librte_node.so.23.0 00:02:15.299 [732/740] Linking target lib/librte_bitratestats.so.23.0 00:02:15.299 [733/740] Linking target lib/librte_latencystats.so.23.0 00:02:15.299 [734/740] Linking target lib/librte_pdump.so.23.0 00:02:15.299 [735/740] Linking target lib/librte_port.so.23.0 00:02:15.558 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:15.558 [737/740] Linking target lib/librte_table.so.23.0 00:02:15.558 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:16.938 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.938 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:17.197 17:05:13 -- common/autobuild_common.sh@190 -- $ ninja -C 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:02:17.197 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:17.197 [0/1] Installing files. 00:02:17.461 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.461 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.462 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.462 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:17.463 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:17.463 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:17.463 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.463 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.464 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.465 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.466 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:17.466 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:17.466 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_telemetry.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing 
lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.466 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.467 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.730 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_power.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 
Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:17.731 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:17.731 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:17.731 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.731 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:17.731 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.731 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.732 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.733 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:17.734 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:17.734 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:17.734 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:17.734 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:17.735 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:17.735 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:17.735 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:17.735 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:17.735 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:17.735 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:17.735 Installing symlink 
pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:17.735 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:17.735 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:17.735 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:17.735 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:17.735 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:17.735 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:17.735 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:17.735 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:17.735 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:17.735 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:17.735 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:17.735 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:17.735 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:17.735 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:17.735 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:17.735 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:17.735 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:17.735 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:17.735 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:17.735 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:17.735 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:17.735 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:17.735 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:17.735 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:17.735 Installing symlink pointing to librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:17.735 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:17.735 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:17.735 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:17.735 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:17.735 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:17.735 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:17.735 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:17.735 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:17.735 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:17.735 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:17.735 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:17.735 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:17.735 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:17.735 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:17.735 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:17.735 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:17.735 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:17.735 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:17.735 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:17.735 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:17.735 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:17.735 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:17.735 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:17.735 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:17.735 
Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:17.735 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:17.735 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:17.735 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:17.735 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:17.735 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:17.735 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:17.735 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:17.735 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:17.735 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:17.735 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:17.735 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:17.735 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:17.735 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:17.735 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:17.735 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:17.735 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:17.735 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:17.735 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:17.735 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:17.735 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:17.735 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:17.735 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:17.735 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:17.735 Installing symlink pointing to librte_security.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:17.735 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:17.735 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:17.736 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:17.736 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:17.736 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:17.736 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:17.736 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:17.736 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:17.736 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:17.736 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:17.736 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:17.736 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:17.736 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:17.736 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:17.736 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:17.736 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:17.736 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:17.736 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:17.736 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:17.736 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:17.736 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:17.736 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:17.736 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:17.736 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:17.736 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:17.736 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:17.736 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:17.736 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:17.736 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:17.736 Installing symlink pointing to librte_graph.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:17.736 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:17.736 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:17.736 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:17.736 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:17.736 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:17.736 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:17.736 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:17.736 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:17.736 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:17.736 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:17.736 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:17.736 17:05:14 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:17.736 17:05:14 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:17.736 17:05:14 -- common/autobuild_common.sh@203 -- $ cat 00:02:17.736 17:05:14 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:17.736 00:02:17.736 real 0m26.153s 00:02:17.736 user 6m37.569s 00:02:17.736 sys 2m14.422s 00:02:17.736 17:05:14 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:17.736 17:05:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.736 ************************************ 00:02:17.736 END TEST build_native_dpdk 00:02:17.736 ************************************ 00:02:17.996 17:05:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:17.996 17:05:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:17.996 17:05:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:17.996 17:05:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:17.996 17:05:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:17.996 17:05:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:17.996 17:05:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:17.996 17:05:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:17.996 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
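The libdpdk.pc and libdpdk-libs.pc files installed into dpdk/build/lib/pkgconfig earlier are what the SPDK configure step just above ("Using .../dpdk/build/lib/pkgconfig for additional libs...") consumes to resolve DPDK compile and link flags. As a minimal illustrative sketch only (not taken from the log; the exact flag output depends on this build), the same information can be queried by hand with pkg-config:

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk   # version of this local DPDK build
  pkg-config --cflags libdpdk       # include paths under .../dpdk/build/include
  pkg-config --libs libdpdk         # -L.../dpdk/build/lib plus the librte_* link line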
00:02:18.255 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:18.255 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:18.255 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:18.514 Using 'verbs' RDMA provider 00:02:34.053 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:46.276 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:46.276 Creating mk/config.mk...done. 00:02:46.276 Creating mk/cc.flags.mk...done. 00:02:46.276 Type 'make' to build. 00:02:46.276 17:05:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:46.276 17:05:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:46.276 17:05:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:46.276 17:05:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.276 ************************************ 00:02:46.276 START TEST make 00:02:46.276 ************************************ 00:02:46.276 17:05:42 -- common/autotest_common.sh@1114 -- $ make -j112 00:02:46.276 make[1]: Nothing to be done for 'all'. 00:02:56.265 CC lib/log/log.o 00:02:56.265 CC lib/ut_mock/mock.o 00:02:56.265 CC lib/log/log_flags.o 00:02:56.265 CC lib/log/log_deprecated.o 00:02:56.265 CC lib/ut/ut.o 00:02:56.265 LIB libspdk_ut_mock.a 00:02:56.265 LIB libspdk_log.a 00:02:56.265 LIB libspdk_ut.a 00:02:56.265 SO libspdk_ut_mock.so.5.0 00:02:56.265 SO libspdk_log.so.6.1 00:02:56.265 SO libspdk_ut.so.1.0 00:02:56.265 SYMLINK libspdk_ut_mock.so 00:02:56.265 SYMLINK libspdk_log.so 00:02:56.265 SYMLINK libspdk_ut.so 00:02:56.265 CC lib/dma/dma.o 00:02:56.265 CC lib/ioat/ioat.o 00:02:56.265 CXX lib/trace_parser/trace.o 00:02:56.265 CC lib/util/base64.o 00:02:56.265 CC lib/util/bit_array.o 00:02:56.265 CC lib/util/cpuset.o 00:02:56.265 CC lib/util/crc16.o 00:02:56.265 CC lib/util/crc32.o 00:02:56.265 CC lib/util/crc32c.o 00:02:56.265 CC lib/util/crc32_ieee.o 00:02:56.265 CC lib/util/crc64.o 00:02:56.265 CC lib/util/dif.o 00:02:56.265 CC lib/util/fd.o 00:02:56.265 CC lib/util/file.o 00:02:56.265 CC lib/util/hexlify.o 00:02:56.265 CC lib/util/iov.o 00:02:56.265 CC lib/util/math.o 00:02:56.265 CC lib/util/pipe.o 00:02:56.265 CC lib/util/strerror_tls.o 00:02:56.265 CC lib/util/string.o 00:02:56.265 CC lib/util/uuid.o 00:02:56.265 CC lib/util/xor.o 00:02:56.265 CC lib/util/fd_group.o 00:02:56.265 CC lib/util/zipf.o 00:02:56.265 CC lib/vfio_user/host/vfio_user_pci.o 00:02:56.265 CC lib/vfio_user/host/vfio_user.o 00:02:56.265 LIB libspdk_dma.a 00:02:56.265 SO libspdk_dma.so.3.0 00:02:56.524 LIB libspdk_ioat.a 00:02:56.524 SYMLINK libspdk_dma.so 00:02:56.524 SO libspdk_ioat.so.6.0 00:02:56.524 LIB libspdk_vfio_user.a 00:02:56.524 SYMLINK libspdk_ioat.so 00:02:56.524 SO libspdk_vfio_user.so.4.0 00:02:56.524 LIB libspdk_util.a 00:02:56.524 SYMLINK libspdk_vfio_user.so 00:02:56.783 SO libspdk_util.so.8.0 00:02:56.783 SYMLINK libspdk_util.so 00:02:56.783 LIB libspdk_trace_parser.a 00:02:56.783 SO libspdk_trace_parser.so.4.0 00:02:57.041 SYMLINK libspdk_trace_parser.so 00:02:57.041 CC lib/rdma/common.o 00:02:57.041 CC lib/json/json_parse.o 00:02:57.041 CC lib/rdma/rdma_verbs.o 00:02:57.041 CC lib/idxd/idxd_user.o 00:02:57.041 CC lib/json/json_util.o 00:02:57.041 CC lib/idxd/idxd.o 00:02:57.041 CC lib/conf/conf.o 00:02:57.041 CC lib/json/json_write.o 00:02:57.041 CC lib/idxd/idxd_kernel.o 00:02:57.041 CC lib/vmd/vmd.o 
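Each SPDK component in the make output follows the same four-step pattern visible above: compile objects (CC), archive a static library (LIB), link a versioned shared object (SO), then create the unversioned symlink (SYMLINK). A rough shell equivalent for the log component, offered purely as a sketch and not the actual SPDK Makefile rules, would be:

  cc -fPIC -c lib/log/log.c -o lib/log/log.o   # CC      lib/log/log.o
  ar crs libspdk_log.a lib/log/log.o           # LIB     libspdk_log.a
  cc -shared -o libspdk_log.so.6.1 lib/log/log.o   # SO      libspdk_log.so.6.1
  ln -sf libspdk_log.so.6.1 libspdk_log.so         # SYMLINK libspdk_log.so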
00:02:57.041 CC lib/env_dpdk/env.o 00:02:57.041 CC lib/vmd/led.o 00:02:57.041 CC lib/env_dpdk/memory.o 00:02:57.041 CC lib/env_dpdk/pci.o 00:02:57.041 CC lib/env_dpdk/init.o 00:02:57.041 CC lib/env_dpdk/threads.o 00:02:57.041 CC lib/env_dpdk/pci_ioat.o 00:02:57.041 CC lib/env_dpdk/pci_virtio.o 00:02:57.041 CC lib/env_dpdk/pci_vmd.o 00:02:57.041 CC lib/env_dpdk/pci_idxd.o 00:02:57.041 CC lib/env_dpdk/pci_event.o 00:02:57.041 CC lib/env_dpdk/pci_dpdk.o 00:02:57.041 CC lib/env_dpdk/sigbus_handler.o 00:02:57.041 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.041 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.300 LIB libspdk_conf.a 00:02:57.300 LIB libspdk_rdma.a 00:02:57.300 LIB libspdk_json.a 00:02:57.300 SO libspdk_conf.so.5.0 00:02:57.300 SO libspdk_rdma.so.5.0 00:02:57.300 SO libspdk_json.so.5.1 00:02:57.300 SYMLINK libspdk_conf.so 00:02:57.300 SYMLINK libspdk_rdma.so 00:02:57.300 SYMLINK libspdk_json.so 00:02:57.560 LIB libspdk_idxd.a 00:02:57.560 SO libspdk_idxd.so.11.0 00:02:57.560 LIB libspdk_vmd.a 00:02:57.560 SYMLINK libspdk_idxd.so 00:02:57.560 SO libspdk_vmd.so.5.0 00:02:57.560 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.560 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.560 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.560 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.560 SYMLINK libspdk_vmd.so 00:02:57.820 LIB libspdk_jsonrpc.a 00:02:57.820 SO libspdk_jsonrpc.so.5.1 00:02:58.079 SYMLINK libspdk_jsonrpc.so 00:02:58.079 LIB libspdk_env_dpdk.a 00:02:58.079 SO libspdk_env_dpdk.so.13.0 00:02:58.338 SYMLINK libspdk_env_dpdk.so 00:02:58.338 CC lib/rpc/rpc.o 00:02:58.338 LIB libspdk_rpc.a 00:02:58.338 SO libspdk_rpc.so.5.0 00:02:58.606 SYMLINK libspdk_rpc.so 00:02:58.865 CC lib/notify/notify.o 00:02:58.865 CC lib/notify/notify_rpc.o 00:02:58.865 CC lib/trace/trace.o 00:02:58.865 CC lib/trace/trace_flags.o 00:02:58.865 CC lib/trace/trace_rpc.o 00:02:58.865 CC lib/sock/sock.o 00:02:58.865 CC lib/sock/sock_rpc.o 00:02:58.865 LIB libspdk_notify.a 00:02:58.865 SO libspdk_notify.so.5.0 00:02:58.865 LIB libspdk_trace.a 00:02:58.865 SO libspdk_trace.so.9.0 00:02:59.124 SYMLINK libspdk_notify.so 00:02:59.124 LIB libspdk_sock.a 00:02:59.124 SYMLINK libspdk_trace.so 00:02:59.124 SO libspdk_sock.so.8.0 00:02:59.124 SYMLINK libspdk_sock.so 00:02:59.384 CC lib/thread/thread.o 00:02:59.384 CC lib/thread/iobuf.o 00:02:59.384 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.384 CC lib/nvme/nvme_ctrlr.o 00:02:59.384 CC lib/nvme/nvme_fabric.o 00:02:59.384 CC lib/nvme/nvme_ns_cmd.o 00:02:59.384 CC lib/nvme/nvme_ns.o 00:02:59.384 CC lib/nvme/nvme_pcie_common.o 00:02:59.384 CC lib/nvme/nvme_pcie.o 00:02:59.384 CC lib/nvme/nvme_qpair.o 00:02:59.384 CC lib/nvme/nvme.o 00:02:59.384 CC lib/nvme/nvme_quirks.o 00:02:59.384 CC lib/nvme/nvme_transport.o 00:02:59.384 CC lib/nvme/nvme_discovery.o 00:02:59.384 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.384 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.384 CC lib/nvme/nvme_tcp.o 00:02:59.384 CC lib/nvme/nvme_opal.o 00:02:59.384 CC lib/nvme/nvme_io_msg.o 00:02:59.384 CC lib/nvme/nvme_poll_group.o 00:02:59.384 CC lib/nvme/nvme_zns.o 00:02:59.384 CC lib/nvme/nvme_cuse.o 00:02:59.384 CC lib/nvme/nvme_vfio_user.o 00:02:59.384 CC lib/nvme/nvme_rdma.o 00:03:00.321 LIB libspdk_thread.a 00:03:00.580 SO libspdk_thread.so.9.0 00:03:00.580 SYMLINK libspdk_thread.so 00:03:00.839 CC lib/accel/accel.o 00:03:00.839 CC lib/accel/accel_rpc.o 00:03:00.839 CC lib/init/json_config.o 00:03:00.839 CC lib/accel/accel_sw.o 00:03:00.839 CC lib/init/subsystem.o 00:03:00.839 CC lib/init/subsystem_rpc.o 00:03:00.839 CC lib/init/rpc.o 
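The lib/env_dpdk objects above form SPDK's environment layer on top of DPDK, so these are the libraries expected to pull in the librte_* shared objects installed earlier rather than any system copy. A quick illustrative check (the build/lib output path is an assumption based on SPDK's default layout, and --with-shared from the configure step is what makes the .so exist at all):

  ldd build/lib/libspdk_env_dpdk.so | grep librte_   # should resolve to .../dpdk/build/lib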
00:03:00.839 CC lib/virtio/virtio.o 00:03:00.839 CC lib/blob/blobstore.o 00:03:00.839 CC lib/blob/zeroes.o 00:03:00.839 CC lib/virtio/virtio_vhost_user.o 00:03:00.839 CC lib/virtio/virtio_vfio_user.o 00:03:00.839 CC lib/blob/request.o 00:03:00.839 CC lib/virtio/virtio_pci.o 00:03:00.839 CC lib/blob/blob_bs_dev.o 00:03:00.839 LIB libspdk_nvme.a 00:03:00.839 LIB libspdk_init.a 00:03:01.098 SO libspdk_init.so.4.0 00:03:01.098 SO libspdk_nvme.so.12.0 00:03:01.098 LIB libspdk_virtio.a 00:03:01.098 SYMLINK libspdk_init.so 00:03:01.098 SO libspdk_virtio.so.6.0 00:03:01.098 SYMLINK libspdk_virtio.so 00:03:01.357 SYMLINK libspdk_nvme.so 00:03:01.357 CC lib/event/app.o 00:03:01.357 CC lib/event/reactor.o 00:03:01.357 CC lib/event/log_rpc.o 00:03:01.357 CC lib/event/app_rpc.o 00:03:01.357 CC lib/event/scheduler_static.o 00:03:01.617 LIB libspdk_accel.a 00:03:01.617 SO libspdk_accel.so.14.0 00:03:01.617 SYMLINK libspdk_accel.so 00:03:01.617 LIB libspdk_event.a 00:03:01.617 SO libspdk_event.so.12.0 00:03:01.875 SYMLINK libspdk_event.so 00:03:01.875 CC lib/bdev/bdev.o 00:03:01.875 CC lib/bdev/bdev_rpc.o 00:03:01.875 CC lib/bdev/bdev_zone.o 00:03:01.875 CC lib/bdev/part.o 00:03:01.875 CC lib/bdev/scsi_nvme.o 00:03:02.813 LIB libspdk_blob.a 00:03:02.813 SO libspdk_blob.so.10.1 00:03:02.813 SYMLINK libspdk_blob.so 00:03:03.073 CC lib/blobfs/tree.o 00:03:03.073 CC lib/blobfs/blobfs.o 00:03:03.073 CC lib/lvol/lvol.o 00:03:03.640 LIB libspdk_bdev.a 00:03:03.640 LIB libspdk_blobfs.a 00:03:03.640 SO libspdk_bdev.so.14.0 00:03:03.640 SO libspdk_blobfs.so.9.0 00:03:03.640 LIB libspdk_lvol.a 00:03:03.640 SO libspdk_lvol.so.9.1 00:03:03.900 SYMLINK libspdk_bdev.so 00:03:03.900 SYMLINK libspdk_blobfs.so 00:03:03.900 SYMLINK libspdk_lvol.so 00:03:03.900 CC lib/scsi/dev.o 00:03:03.900 CC lib/scsi/lun.o 00:03:03.900 CC lib/scsi/port.o 00:03:03.900 CC lib/nvmf/ctrlr.o 00:03:03.900 CC lib/ublk/ublk.o 00:03:03.900 CC lib/ftl/ftl_core.o 00:03:03.900 CC lib/scsi/scsi.o 00:03:03.900 CC lib/nvmf/ctrlr_discovery.o 00:03:03.900 CC lib/nbd/nbd.o 00:03:03.900 CC lib/ftl/ftl_init.o 00:03:03.901 CC lib/ublk/ublk_rpc.o 00:03:03.901 CC lib/scsi/scsi_bdev.o 00:03:03.901 CC lib/nvmf/ctrlr_bdev.o 00:03:03.901 CC lib/nbd/nbd_rpc.o 00:03:03.901 CC lib/scsi/scsi_pr.o 00:03:03.901 CC lib/ftl/ftl_layout.o 00:03:03.901 CC lib/nvmf/subsystem.o 00:03:03.901 CC lib/scsi/scsi_rpc.o 00:03:03.901 CC lib/ftl/ftl_debug.o 00:03:03.901 CC lib/nvmf/nvmf.o 00:03:03.901 CC lib/scsi/task.o 00:03:03.901 CC lib/ftl/ftl_io.o 00:03:03.901 CC lib/nvmf/nvmf_rpc.o 00:03:03.901 CC lib/ftl/ftl_sb.o 00:03:03.901 CC lib/nvmf/transport.o 00:03:03.901 CC lib/ftl/ftl_l2p.o 00:03:03.901 CC lib/nvmf/tcp.o 00:03:03.901 CC lib/ftl/ftl_l2p_flat.o 00:03:03.901 CC lib/nvmf/rdma.o 00:03:03.901 CC lib/ftl/ftl_nv_cache.o 00:03:03.901 CC lib/ftl/ftl_band.o 00:03:03.901 CC lib/ftl/ftl_writer.o 00:03:03.901 CC lib/ftl/ftl_band_ops.o 00:03:03.901 CC lib/ftl/ftl_rq.o 00:03:03.901 CC lib/ftl/ftl_reloc.o 00:03:03.901 CC lib/ftl/ftl_l2p_cache.o 00:03:03.901 CC lib/ftl/ftl_p2l.o 00:03:03.901 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.159 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.159 CC lib/ftl/utils/ftl_conf.o 00:03:04.159 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.159 CC lib/ftl/utils/ftl_md.o 00:03:04.159 CC lib/ftl/utils/ftl_mempool.o 00:03:04.159 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.159 CC lib/ftl/utils/ftl_property.o 00:03:04.159 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.159 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.159 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.159 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.159 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.159 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.159 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.159 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.159 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.159 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.159 CC lib/ftl/base/ftl_base_dev.o 00:03:04.159 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.159 CC lib/ftl/ftl_trace.o 00:03:04.418 LIB libspdk_nbd.a 00:03:04.418 SO libspdk_nbd.so.6.0 00:03:04.418 LIB libspdk_scsi.a 00:03:04.676 SYMLINK libspdk_nbd.so 00:03:04.676 SO libspdk_scsi.so.8.0 00:03:04.676 LIB libspdk_ublk.a 00:03:04.676 SO libspdk_ublk.so.2.0 00:03:04.676 SYMLINK libspdk_scsi.so 00:03:04.676 SYMLINK libspdk_ublk.so 00:03:04.934 LIB libspdk_ftl.a 00:03:04.934 CC lib/vhost/vhost_scsi.o 00:03:04.934 CC lib/vhost/vhost.o 00:03:04.934 CC lib/vhost/vhost_rpc.o 00:03:04.934 CC lib/vhost/rte_vhost_user.o 00:03:04.934 CC lib/vhost/vhost_blk.o 00:03:04.934 CC lib/iscsi/init_grp.o 00:03:04.934 CC lib/iscsi/conn.o 00:03:04.934 CC lib/iscsi/iscsi.o 00:03:04.934 CC lib/iscsi/md5.o 00:03:04.934 CC lib/iscsi/param.o 00:03:04.934 CC lib/iscsi/portal_grp.o 00:03:04.934 CC lib/iscsi/tgt_node.o 00:03:04.934 CC lib/iscsi/iscsi_subsystem.o 00:03:04.934 CC lib/iscsi/iscsi_rpc.o 00:03:04.934 CC lib/iscsi/task.o 00:03:04.934 SO libspdk_ftl.so.8.0 00:03:05.193 SYMLINK libspdk_ftl.so 00:03:05.761 LIB libspdk_nvmf.a 00:03:05.761 LIB libspdk_vhost.a 00:03:05.761 SO libspdk_nvmf.so.17.0 00:03:05.761 SO libspdk_vhost.so.7.1 00:03:05.761 SYMLINK libspdk_vhost.so 00:03:05.761 SYMLINK libspdk_nvmf.so 00:03:05.761 LIB libspdk_iscsi.a 00:03:06.020 SO libspdk_iscsi.so.7.0 00:03:06.020 SYMLINK libspdk_iscsi.so 00:03:06.589 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.589 CC module/accel/error/accel_error.o 00:03:06.589 CC module/blob/bdev/blob_bdev.o 00:03:06.589 CC module/accel/error/accel_error_rpc.o 00:03:06.589 CC module/sock/posix/posix.o 00:03:06.589 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.589 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.589 CC module/scheduler/gscheduler/gscheduler.o 00:03:06.589 CC module/accel/ioat/accel_ioat.o 00:03:06.589 CC module/accel/dsa/accel_dsa.o 00:03:06.589 CC module/accel/iaa/accel_iaa.o 00:03:06.589 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.589 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.589 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.589 LIB libspdk_env_dpdk_rpc.a 00:03:06.589 SO libspdk_env_dpdk_rpc.so.5.0 00:03:06.589 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.848 LIB libspdk_scheduler_gscheduler.a 00:03:06.848 LIB libspdk_accel_error.a 00:03:06.848 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.848 LIB libspdk_accel_ioat.a 00:03:06.848 LIB libspdk_scheduler_dynamic.a 00:03:06.848 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:06.848 LIB libspdk_accel_iaa.a 00:03:06.848 SO libspdk_scheduler_gscheduler.so.3.0 00:03:06.848 SO libspdk_accel_error.so.1.0 00:03:06.848 LIB libspdk_accel_dsa.a 00:03:06.848 SO libspdk_accel_ioat.so.5.0 00:03:06.848 SO 
libspdk_scheduler_dynamic.so.3.0 00:03:06.848 LIB libspdk_blob_bdev.a 00:03:06.848 SO libspdk_accel_iaa.so.2.0 00:03:06.848 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.848 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.848 SO libspdk_blob_bdev.so.10.1 00:03:06.848 SO libspdk_accel_dsa.so.4.0 00:03:06.848 SYMLINK libspdk_accel_error.so 00:03:06.848 SYMLINK libspdk_accel_ioat.so 00:03:06.848 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.848 SYMLINK libspdk_accel_iaa.so 00:03:06.848 SYMLINK libspdk_blob_bdev.so 00:03:06.848 SYMLINK libspdk_accel_dsa.so 00:03:07.108 LIB libspdk_sock_posix.a 00:03:07.108 SO libspdk_sock_posix.so.5.0 00:03:07.366 SYMLINK libspdk_sock_posix.so 00:03:07.366 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.366 CC module/bdev/gpt/gpt.o 00:03:07.366 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.366 CC module/bdev/raid/bdev_raid.o 00:03:07.366 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.366 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.366 CC module/bdev/delay/vbdev_delay.o 00:03:07.366 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.366 CC module/bdev/error/vbdev_error.o 00:03:07.366 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.366 CC module/bdev/raid/raid0.o 00:03:07.366 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.366 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.366 CC module/bdev/ftl/bdev_ftl.o 00:03:07.366 CC module/bdev/raid/raid1.o 00:03:07.366 CC module/bdev/aio/bdev_aio.o 00:03:07.366 CC module/bdev/nvme/nvme_rpc.o 00:03:07.366 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.366 CC module/bdev/raid/concat.o 00:03:07.366 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.366 CC module/bdev/nvme/bdev_nvme.o 00:03:07.366 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.366 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.366 CC module/bdev/malloc/bdev_malloc.o 00:03:07.366 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.366 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.366 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.366 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.366 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.366 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.366 CC module/bdev/nvme/vbdev_opal.o 00:03:07.366 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.366 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.366 CC module/bdev/split/vbdev_split.o 00:03:07.366 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.366 CC module/bdev/null/bdev_null.o 00:03:07.366 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.366 CC module/bdev/null/bdev_null_rpc.o 00:03:07.366 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.366 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.366 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.366 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.625 LIB libspdk_blobfs_bdev.a 00:03:07.625 LIB libspdk_bdev_split.a 00:03:07.625 SO libspdk_blobfs_bdev.so.5.0 00:03:07.625 SO libspdk_bdev_split.so.5.0 00:03:07.625 LIB libspdk_bdev_error.a 00:03:07.625 LIB libspdk_bdev_gpt.a 00:03:07.625 LIB libspdk_bdev_aio.a 00:03:07.625 LIB libspdk_bdev_null.a 00:03:07.625 LIB libspdk_bdev_passthru.a 00:03:07.625 LIB libspdk_bdev_ftl.a 00:03:07.625 SO libspdk_bdev_error.so.5.0 00:03:07.625 SYMLINK libspdk_blobfs_bdev.so 00:03:07.625 SO libspdk_bdev_aio.so.5.0 00:03:07.625 SO libspdk_bdev_gpt.so.5.0 00:03:07.625 SO libspdk_bdev_null.so.5.0 00:03:07.625 SO libspdk_bdev_passthru.so.5.0 00:03:07.625 SO libspdk_bdev_ftl.so.5.0 00:03:07.625 SYMLINK libspdk_bdev_split.so 00:03:07.625 LIB libspdk_bdev_zone_block.a 00:03:07.625 LIB libspdk_bdev_delay.a 
00:03:07.625 LIB libspdk_bdev_iscsi.a 00:03:07.625 LIB libspdk_bdev_malloc.a 00:03:07.625 SYMLINK libspdk_bdev_error.so 00:03:07.625 SYMLINK libspdk_bdev_aio.so 00:03:07.625 SO libspdk_bdev_zone_block.so.5.0 00:03:07.625 SO libspdk_bdev_delay.so.5.0 00:03:07.625 SYMLINK libspdk_bdev_gpt.so 00:03:07.625 SYMLINK libspdk_bdev_passthru.so 00:03:07.625 SYMLINK libspdk_bdev_null.so 00:03:07.625 SYMLINK libspdk_bdev_ftl.so 00:03:07.625 SO libspdk_bdev_iscsi.so.5.0 00:03:07.625 SO libspdk_bdev_malloc.so.5.0 00:03:07.625 LIB libspdk_bdev_lvol.a 00:03:07.625 SYMLINK libspdk_bdev_zone_block.so 00:03:07.625 SYMLINK libspdk_bdev_delay.so 00:03:07.625 LIB libspdk_bdev_virtio.a 00:03:07.625 SYMLINK libspdk_bdev_iscsi.so 00:03:07.884 SYMLINK libspdk_bdev_malloc.so 00:03:07.884 SO libspdk_bdev_lvol.so.5.0 00:03:07.884 SO libspdk_bdev_virtio.so.5.0 00:03:07.884 SYMLINK libspdk_bdev_lvol.so 00:03:07.884 SYMLINK libspdk_bdev_virtio.so 00:03:07.884 LIB libspdk_bdev_raid.a 00:03:08.144 SO libspdk_bdev_raid.so.5.0 00:03:08.144 SYMLINK libspdk_bdev_raid.so 00:03:08.718 LIB libspdk_bdev_nvme.a 00:03:09.045 SO libspdk_bdev_nvme.so.6.0 00:03:09.045 SYMLINK libspdk_bdev_nvme.so 00:03:09.614 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.614 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.614 CC module/event/subsystems/vmd/vmd.o 00:03:09.614 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.614 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.614 CC module/event/subsystems/sock/sock.o 00:03:09.614 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.614 LIB libspdk_event_vhost_blk.a 00:03:09.614 LIB libspdk_event_sock.a 00:03:09.614 LIB libspdk_event_scheduler.a 00:03:09.614 LIB libspdk_event_vmd.a 00:03:09.614 LIB libspdk_event_iobuf.a 00:03:09.614 SO libspdk_event_scheduler.so.3.0 00:03:09.614 SO libspdk_event_vhost_blk.so.2.0 00:03:09.614 SO libspdk_event_sock.so.4.0 00:03:09.614 SO libspdk_event_vmd.so.5.0 00:03:09.614 SO libspdk_event_iobuf.so.2.0 00:03:09.614 SYMLINK libspdk_event_vhost_blk.so 00:03:09.614 SYMLINK libspdk_event_scheduler.so 00:03:09.614 SYMLINK libspdk_event_sock.so 00:03:09.614 SYMLINK libspdk_event_vmd.so 00:03:09.614 SYMLINK libspdk_event_iobuf.so 00:03:09.874 CC module/event/subsystems/accel/accel.o 00:03:10.133 LIB libspdk_event_accel.a 00:03:10.133 SO libspdk_event_accel.so.5.0 00:03:10.133 SYMLINK libspdk_event_accel.so 00:03:10.393 CC module/event/subsystems/bdev/bdev.o 00:03:10.652 LIB libspdk_event_bdev.a 00:03:10.652 SO libspdk_event_bdev.so.5.0 00:03:10.652 SYMLINK libspdk_event_bdev.so 00:03:10.911 CC module/event/subsystems/scsi/scsi.o 00:03:10.911 CC module/event/subsystems/ublk/ublk.o 00:03:10.911 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:10.911 CC module/event/subsystems/nbd/nbd.o 00:03:10.911 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.170 LIB libspdk_event_nbd.a 00:03:11.170 LIB libspdk_event_ublk.a 00:03:11.170 LIB libspdk_event_scsi.a 00:03:11.170 SO libspdk_event_nbd.so.5.0 00:03:11.170 SO libspdk_event_ublk.so.2.0 00:03:11.170 SO libspdk_event_scsi.so.5.0 00:03:11.170 LIB libspdk_event_nvmf.a 00:03:11.170 SYMLINK libspdk_event_nbd.so 00:03:11.170 SYMLINK libspdk_event_ublk.so 00:03:11.170 SO libspdk_event_nvmf.so.5.0 00:03:11.170 SYMLINK libspdk_event_scsi.so 00:03:11.170 SYMLINK libspdk_event_nvmf.so 00:03:11.429 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.429 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.689 LIB libspdk_event_vhost_scsi.a 00:03:11.689 LIB libspdk_event_iscsi.a 00:03:11.689 SO 
libspdk_event_vhost_scsi.so.2.0 00:03:11.689 SO libspdk_event_iscsi.so.5.0 00:03:11.689 SYMLINK libspdk_event_iscsi.so 00:03:11.689 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.948 SO libspdk.so.5.0 00:03:11.948 SYMLINK libspdk.so 00:03:12.214 CC app/trace_record/trace_record.o 00:03:12.214 TEST_HEADER include/spdk/accel.h 00:03:12.214 TEST_HEADER include/spdk/accel_module.h 00:03:12.214 TEST_HEADER include/spdk/assert.h 00:03:12.214 TEST_HEADER include/spdk/barrier.h 00:03:12.214 TEST_HEADER include/spdk/base64.h 00:03:12.214 TEST_HEADER include/spdk/bdev_module.h 00:03:12.214 TEST_HEADER include/spdk/bdev.h 00:03:12.215 TEST_HEADER include/spdk/bdev_zone.h 00:03:12.215 CC test/rpc_client/rpc_client_test.o 00:03:12.215 CC app/spdk_top/spdk_top.o 00:03:12.215 TEST_HEADER include/spdk/bit_array.h 00:03:12.215 TEST_HEADER include/spdk/bit_pool.h 00:03:12.215 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.215 CXX app/trace/trace.o 00:03:12.215 TEST_HEADER include/spdk/blob_bdev.h 00:03:12.215 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:12.215 TEST_HEADER include/spdk/blobfs.h 00:03:12.215 TEST_HEADER include/spdk/blob.h 00:03:12.215 TEST_HEADER include/spdk/conf.h 00:03:12.215 CC app/spdk_nvme_perf/perf.o 00:03:12.215 TEST_HEADER include/spdk/cpuset.h 00:03:12.215 TEST_HEADER include/spdk/config.h 00:03:12.215 CC app/spdk_nvme_identify/identify.o 00:03:12.215 TEST_HEADER include/spdk/crc16.h 00:03:12.215 TEST_HEADER include/spdk/crc32.h 00:03:12.215 TEST_HEADER include/spdk/crc64.h 00:03:12.215 TEST_HEADER include/spdk/dif.h 00:03:12.215 TEST_HEADER include/spdk/dma.h 00:03:12.215 TEST_HEADER include/spdk/endian.h 00:03:12.215 TEST_HEADER include/spdk/env_dpdk.h 00:03:12.215 TEST_HEADER include/spdk/env.h 00:03:12.215 TEST_HEADER include/spdk/event.h 00:03:12.215 TEST_HEADER include/spdk/fd_group.h 00:03:12.215 TEST_HEADER include/spdk/fd.h 00:03:12.215 TEST_HEADER include/spdk/file.h 00:03:12.215 TEST_HEADER include/spdk/ftl.h 00:03:12.215 CC app/spdk_dd/spdk_dd.o 00:03:12.215 CC app/spdk_lspci/spdk_lspci.o 00:03:12.215 TEST_HEADER include/spdk/gpt_spec.h 00:03:12.215 TEST_HEADER include/spdk/hexlify.h 00:03:12.215 TEST_HEADER include/spdk/histogram_data.h 00:03:12.215 TEST_HEADER include/spdk/idxd.h 00:03:12.215 TEST_HEADER include/spdk/idxd_spec.h 00:03:12.215 TEST_HEADER include/spdk/init.h 00:03:12.215 TEST_HEADER include/spdk/ioat_spec.h 00:03:12.215 TEST_HEADER include/spdk/iscsi_spec.h 00:03:12.215 TEST_HEADER include/spdk/ioat.h 00:03:12.215 TEST_HEADER include/spdk/json.h 00:03:12.215 TEST_HEADER include/spdk/jsonrpc.h 00:03:12.215 TEST_HEADER include/spdk/likely.h 00:03:12.215 TEST_HEADER include/spdk/log.h 00:03:12.215 TEST_HEADER include/spdk/lvol.h 00:03:12.215 TEST_HEADER include/spdk/memory.h 00:03:12.215 TEST_HEADER include/spdk/mmio.h 00:03:12.215 TEST_HEADER include/spdk/nbd.h 00:03:12.215 TEST_HEADER include/spdk/notify.h 00:03:12.215 TEST_HEADER include/spdk/nvme.h 00:03:12.215 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:12.215 TEST_HEADER include/spdk/nvme_intel.h 00:03:12.215 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:12.215 TEST_HEADER include/spdk/nvme_spec.h 00:03:12.215 TEST_HEADER include/spdk/nvme_zns.h 00:03:12.215 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:12.215 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:12.215 TEST_HEADER include/spdk/nvmf_spec.h 00:03:12.215 TEST_HEADER include/spdk/nvmf.h 00:03:12.215 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.215 TEST_HEADER include/spdk/nvmf_transport.h 00:03:12.215 TEST_HEADER include/spdk/opal.h 
00:03:12.215 TEST_HEADER include/spdk/opal_spec.h 00:03:12.215 TEST_HEADER include/spdk/pci_ids.h 00:03:12.215 TEST_HEADER include/spdk/pipe.h 00:03:12.215 TEST_HEADER include/spdk/queue.h 00:03:12.215 TEST_HEADER include/spdk/reduce.h 00:03:12.215 TEST_HEADER include/spdk/rpc.h 00:03:12.215 TEST_HEADER include/spdk/scsi.h 00:03:12.215 TEST_HEADER include/spdk/scheduler.h 00:03:12.215 TEST_HEADER include/spdk/scsi_spec.h 00:03:12.215 TEST_HEADER include/spdk/sock.h 00:03:12.215 TEST_HEADER include/spdk/stdinc.h 00:03:12.215 TEST_HEADER include/spdk/string.h 00:03:12.215 TEST_HEADER include/spdk/trace.h 00:03:12.215 TEST_HEADER include/spdk/thread.h 00:03:12.215 TEST_HEADER include/spdk/trace_parser.h 00:03:12.215 TEST_HEADER include/spdk/tree.h 00:03:12.215 TEST_HEADER include/spdk/ublk.h 00:03:12.215 TEST_HEADER include/spdk/util.h 00:03:12.215 TEST_HEADER include/spdk/uuid.h 00:03:12.215 CC app/vhost/vhost.o 00:03:12.215 CC app/nvmf_tgt/nvmf_main.o 00:03:12.215 TEST_HEADER include/spdk/version.h 00:03:12.215 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.215 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.215 TEST_HEADER include/spdk/vhost.h 00:03:12.215 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.215 TEST_HEADER include/spdk/vmd.h 00:03:12.215 CC app/spdk_tgt/spdk_tgt.o 00:03:12.215 TEST_HEADER include/spdk/zipf.h 00:03:12.215 TEST_HEADER include/spdk/xor.h 00:03:12.215 CXX test/cpp_headers/accel.o 00:03:12.215 CXX test/cpp_headers/accel_module.o 00:03:12.215 CXX test/cpp_headers/assert.o 00:03:12.215 CXX test/cpp_headers/barrier.o 00:03:12.215 CXX test/cpp_headers/base64.o 00:03:12.215 CXX test/cpp_headers/bdev.o 00:03:12.215 CXX test/cpp_headers/bdev_module.o 00:03:12.215 CXX test/cpp_headers/bdev_zone.o 00:03:12.215 CXX test/cpp_headers/bit_array.o 00:03:12.215 CXX test/cpp_headers/bit_pool.o 00:03:12.215 CXX test/cpp_headers/blob_bdev.o 00:03:12.215 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.215 CXX test/cpp_headers/blob.o 00:03:12.215 CXX test/cpp_headers/blobfs.o 00:03:12.215 CXX test/cpp_headers/conf.o 00:03:12.215 CXX test/cpp_headers/config.o 00:03:12.215 CXX test/cpp_headers/cpuset.o 00:03:12.215 CXX test/cpp_headers/crc32.o 00:03:12.215 CXX test/cpp_headers/crc16.o 00:03:12.215 CXX test/cpp_headers/crc64.o 00:03:12.215 CXX test/cpp_headers/dma.o 00:03:12.215 CXX test/cpp_headers/dif.o 00:03:12.215 CXX test/cpp_headers/endian.o 00:03:12.215 CXX test/cpp_headers/env_dpdk.o 00:03:12.215 CXX test/cpp_headers/event.o 00:03:12.215 CXX test/cpp_headers/env.o 00:03:12.215 CXX test/cpp_headers/fd_group.o 00:03:12.215 CXX test/cpp_headers/file.o 00:03:12.215 CXX test/cpp_headers/fd.o 00:03:12.215 CXX test/cpp_headers/ftl.o 00:03:12.215 CXX test/cpp_headers/gpt_spec.o 00:03:12.215 CXX test/cpp_headers/hexlify.o 00:03:12.215 CXX test/cpp_headers/idxd.o 00:03:12.215 CXX test/cpp_headers/histogram_data.o 00:03:12.215 CXX test/cpp_headers/idxd_spec.o 00:03:12.215 CXX test/cpp_headers/init.o 00:03:12.215 CXX test/cpp_headers/ioat.o 00:03:12.215 CC examples/ioat/verify/verify.o 00:03:12.215 CC examples/nvme/hotplug/hotplug.o 00:03:12.215 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.215 CC examples/nvme/hello_world/hello_world.o 00:03:12.215 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:12.215 CC examples/nvme/abort/abort.o 00:03:12.215 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:12.215 CC examples/nvme/arbitration/arbitration.o 00:03:12.215 CC test/thread/poller_perf/poller_perf.o 00:03:12.215 CC app/fio/nvme/fio_plugin.o 00:03:12.215 CC examples/nvme/reconnect/reconnect.o 
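The TEST_HEADER include/spdk/*.h entries together with the CXX test/cpp_headers/*.o compiles above are a check that every public SPDK header can be included on its own from C++ code. The idea, sketched very loosely here (the real test compiles small generated C++ wrappers per header; this one-liner assumes it is run from the spdk source root):

  for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "$(basename "$h")" \
          | c++ -Iinclude -x c++ -c - -o /dev/null \
          || echo "not self-contained from C++: $h"
  done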
00:03:12.215 CC test/app/jsoncat/jsoncat.o 00:03:12.215 CC test/nvme/reset/reset.o 00:03:12.215 CC test/nvme/e2edp/nvme_dp.o 00:03:12.215 CC test/event/reactor/reactor.o 00:03:12.215 CC examples/ioat/perf/perf.o 00:03:12.215 CC test/event/reactor_perf/reactor_perf.o 00:03:12.215 CC test/event/event_perf/event_perf.o 00:03:12.215 CC test/nvme/simple_copy/simple_copy.o 00:03:12.215 CC examples/accel/perf/accel_perf.o 00:03:12.215 CC examples/idxd/perf/perf.o 00:03:12.215 CC examples/vmd/led/led.o 00:03:12.215 CC test/nvme/sgl/sgl.o 00:03:12.215 CC test/nvme/connect_stress/connect_stress.o 00:03:12.215 CC test/nvme/reserve/reserve.o 00:03:12.215 CC test/nvme/overhead/overhead.o 00:03:12.215 CC test/nvme/aer/aer.o 00:03:12.215 CC test/app/histogram_perf/histogram_perf.o 00:03:12.215 CC test/nvme/cuse/cuse.o 00:03:12.215 CC test/nvme/boot_partition/boot_partition.o 00:03:12.215 CC test/nvme/fdp/fdp.o 00:03:12.215 CC test/nvme/err_injection/err_injection.o 00:03:12.215 CC test/env/memory/memory_ut.o 00:03:12.215 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:12.215 CC test/env/vtophys/vtophys.o 00:03:12.215 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.215 CC test/nvme/startup/startup.o 00:03:12.215 CC test/env/pci/pci_ut.o 00:03:12.215 CC test/app/stub/stub.o 00:03:12.215 CC examples/util/zipf/zipf.o 00:03:12.215 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:12.215 CC test/nvme/compliance/nvme_compliance.o 00:03:12.215 CC test/bdev/bdevio/bdevio.o 00:03:12.215 CC examples/sock/hello_world/hello_sock.o 00:03:12.215 CC test/nvme/fused_ordering/fused_ordering.o 00:03:12.215 CC test/event/app_repeat/app_repeat.o 00:03:12.482 CC test/blobfs/mkfs/mkfs.o 00:03:12.482 CC examples/blob/hello_world/hello_blob.o 00:03:12.482 CC test/dma/test_dma/test_dma.o 00:03:12.482 CC examples/nvmf/nvmf/nvmf.o 00:03:12.482 CC test/app/bdev_svc/bdev_svc.o 00:03:12.482 CC examples/thread/thread/thread_ex.o 00:03:12.482 CC test/event/scheduler/scheduler.o 00:03:12.482 CC app/fio/bdev/fio_plugin.o 00:03:12.482 CC examples/bdev/bdevperf/bdevperf.o 00:03:12.482 CC test/accel/dif/dif.o 00:03:12.482 CC examples/bdev/hello_world/hello_bdev.o 00:03:12.482 CC examples/blob/cli/blobcli.o 00:03:12.482 CC test/lvol/esnap/esnap.o 00:03:12.482 CC test/env/mem_callbacks/mem_callbacks.o 00:03:12.482 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:12.482 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:12.750 LINK spdk_lspci 00:03:12.750 LINK spdk_nvme_discover 00:03:12.750 LINK rpc_client_test 00:03:12.750 LINK interrupt_tgt 00:03:12.750 LINK jsoncat 00:03:12.750 LINK led 00:03:12.750 LINK nvmf_tgt 00:03:12.750 LINK reactor 00:03:12.750 LINK event_perf 00:03:12.750 LINK histogram_perf 00:03:12.750 LINK poller_perf 00:03:12.750 LINK reactor_perf 00:03:12.750 LINK vhost 00:03:12.750 LINK app_repeat 00:03:12.750 LINK zipf 00:03:12.750 LINK env_dpdk_post_init 00:03:12.750 LINK iscsi_tgt 00:03:12.750 LINK lsvmd 00:03:12.750 LINK spdk_trace_record 00:03:12.750 LINK vtophys 00:03:12.750 LINK pmr_persistence 00:03:12.750 LINK startup 00:03:12.750 LINK boot_partition 00:03:12.750 LINK cmb_copy 00:03:12.750 CXX test/cpp_headers/ioat_spec.o 00:03:12.750 LINK spdk_tgt 00:03:12.750 LINK stub 00:03:12.750 LINK err_injection 00:03:12.750 LINK doorbell_aers 00:03:13.011 LINK bdev_svc 00:03:13.011 LINK connect_stress 00:03:13.011 LINK reserve 00:03:13.011 CXX test/cpp_headers/iscsi_spec.o 00:03:13.011 CXX test/cpp_headers/json.o 00:03:13.011 LINK hello_world 00:03:13.011 CXX test/cpp_headers/jsonrpc.o 00:03:13.011 CXX test/cpp_headers/likely.o 
00:03:13.011 CXX test/cpp_headers/log.o 00:03:13.011 CXX test/cpp_headers/lvol.o 00:03:13.011 CXX test/cpp_headers/memory.o 00:03:13.011 LINK mkfs 00:03:13.011 CXX test/cpp_headers/mmio.o 00:03:13.011 CXX test/cpp_headers/nbd.o 00:03:13.011 CXX test/cpp_headers/notify.o 00:03:13.011 LINK verify 00:03:13.011 CXX test/cpp_headers/nvme.o 00:03:13.011 CXX test/cpp_headers/nvme_intel.o 00:03:13.011 CXX test/cpp_headers/nvme_ocssd.o 00:03:13.011 LINK hotplug 00:03:13.011 LINK fused_ordering 00:03:13.011 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:13.011 CXX test/cpp_headers/nvme_spec.o 00:03:13.011 CXX test/cpp_headers/nvme_zns.o 00:03:13.011 CXX test/cpp_headers/nvmf_cmd.o 00:03:13.011 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:13.011 CXX test/cpp_headers/nvmf.o 00:03:13.011 CXX test/cpp_headers/nvmf_spec.o 00:03:13.011 CXX test/cpp_headers/nvmf_transport.o 00:03:13.011 CXX test/cpp_headers/opal.o 00:03:13.011 CXX test/cpp_headers/opal_spec.o 00:03:13.011 CXX test/cpp_headers/pci_ids.o 00:03:13.011 LINK simple_copy 00:03:13.011 LINK ioat_perf 00:03:13.011 LINK hello_blob 00:03:13.011 CXX test/cpp_headers/pipe.o 00:03:13.012 CXX test/cpp_headers/queue.o 00:03:13.012 CXX test/cpp_headers/reduce.o 00:03:13.012 CXX test/cpp_headers/rpc.o 00:03:13.012 CXX test/cpp_headers/scheduler.o 00:03:13.012 CXX test/cpp_headers/scsi.o 00:03:13.012 LINK hello_sock 00:03:13.012 CXX test/cpp_headers/scsi_spec.o 00:03:13.012 CXX test/cpp_headers/sock.o 00:03:13.012 CXX test/cpp_headers/stdinc.o 00:03:13.012 LINK scheduler 00:03:13.012 LINK spdk_dd 00:03:13.012 CXX test/cpp_headers/string.o 00:03:13.012 CXX test/cpp_headers/thread.o 00:03:13.012 LINK thread 00:03:13.012 LINK sgl 00:03:13.012 LINK reset 00:03:13.012 CXX test/cpp_headers/trace.o 00:03:13.012 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.012 LINK nvme_dp 00:03:13.012 LINK hello_bdev 00:03:13.012 LINK aer 00:03:13.012 LINK nvmf 00:03:13.012 LINK mem_callbacks 00:03:13.012 LINK overhead 00:03:13.012 LINK fdp 00:03:13.012 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.012 LINK arbitration 00:03:13.012 CXX test/cpp_headers/trace_parser.o 00:03:13.012 LINK reconnect 00:03:13.012 CXX test/cpp_headers/tree.o 00:03:13.270 CXX test/cpp_headers/ublk.o 00:03:13.271 LINK nvme_compliance 00:03:13.271 LINK idxd_perf 00:03:13.271 CXX test/cpp_headers/util.o 00:03:13.271 CXX test/cpp_headers/uuid.o 00:03:13.271 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.271 CXX test/cpp_headers/version.o 00:03:13.271 LINK bdevio 00:03:13.271 CXX test/cpp_headers/vfio_user_spec.o 00:03:13.271 CXX test/cpp_headers/vhost.o 00:03:13.271 CXX test/cpp_headers/vmd.o 00:03:13.271 LINK test_dma 00:03:13.271 CXX test/cpp_headers/xor.o 00:03:13.271 CXX test/cpp_headers/zipf.o 00:03:13.271 LINK abort 00:03:13.271 LINK dif 00:03:13.271 LINK spdk_trace 00:03:13.271 LINK pci_ut 00:03:13.271 LINK accel_perf 00:03:13.271 LINK spdk_bdev 00:03:13.271 LINK nvme_manage 00:03:13.271 LINK spdk_nvme 00:03:13.529 LINK nvme_fuzz 00:03:13.529 LINK memory_ut 00:03:13.529 LINK blobcli 00:03:13.529 LINK spdk_top 00:03:13.529 LINK spdk_nvme_perf 00:03:13.529 LINK spdk_nvme_identify 00:03:13.788 LINK vhost_fuzz 00:03:13.788 LINK bdevperf 00:03:13.788 LINK cuse 00:03:14.356 LINK iscsi_fuzz 00:03:16.262 LINK esnap 00:03:16.523 00:03:16.523 real 0m30.544s 00:03:16.523 user 4m51.008s 00:03:16.523 sys 2m35.288s 00:03:16.523 17:06:12 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:16.523 17:06:12 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.523 ************************************ 
00:03:16.523 END TEST make 00:03:16.523 ************************************ 00:03:16.523 17:06:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:16.523 17:06:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:16.523 17:06:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:16.783 17:06:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:16.783 17:06:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:16.783 17:06:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:16.783 17:06:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:16.783 17:06:13 -- scripts/common.sh@335 -- # IFS=.-: 00:03:16.783 17:06:13 -- scripts/common.sh@335 -- # read -ra ver1 00:03:16.783 17:06:13 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.783 17:06:13 -- scripts/common.sh@336 -- # read -ra ver2 00:03:16.783 17:06:13 -- scripts/common.sh@337 -- # local 'op=<' 00:03:16.783 17:06:13 -- scripts/common.sh@339 -- # ver1_l=2 00:03:16.783 17:06:13 -- scripts/common.sh@340 -- # ver2_l=1 00:03:16.783 17:06:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:16.783 17:06:13 -- scripts/common.sh@343 -- # case "$op" in 00:03:16.783 17:06:13 -- scripts/common.sh@344 -- # : 1 00:03:16.783 17:06:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:16.783 17:06:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:16.783 17:06:13 -- scripts/common.sh@364 -- # decimal 1 00:03:16.783 17:06:13 -- scripts/common.sh@352 -- # local d=1 00:03:16.783 17:06:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.783 17:06:13 -- scripts/common.sh@354 -- # echo 1 00:03:16.783 17:06:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:16.783 17:06:13 -- scripts/common.sh@365 -- # decimal 2 00:03:16.783 17:06:13 -- scripts/common.sh@352 -- # local d=2 00:03:16.783 17:06:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.783 17:06:13 -- scripts/common.sh@354 -- # echo 2 00:03:16.783 17:06:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:16.783 17:06:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:16.783 17:06:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:16.783 17:06:13 -- scripts/common.sh@367 -- # return 0 00:03:16.783 17:06:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.783 17:06:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.783 --rc genhtml_branch_coverage=1 00:03:16.783 --rc genhtml_function_coverage=1 00:03:16.783 --rc genhtml_legend=1 00:03:16.783 --rc geninfo_all_blocks=1 00:03:16.783 --rc geninfo_unexecuted_blocks=1 00:03:16.783 00:03:16.783 ' 00:03:16.783 17:06:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.783 --rc genhtml_branch_coverage=1 00:03:16.783 --rc genhtml_function_coverage=1 00:03:16.783 --rc genhtml_legend=1 00:03:16.783 --rc geninfo_all_blocks=1 00:03:16.783 --rc geninfo_unexecuted_blocks=1 00:03:16.783 00:03:16.783 ' 00:03:16.783 17:06:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.783 --rc genhtml_branch_coverage=1 00:03:16.783 --rc genhtml_function_coverage=1 00:03:16.783 --rc genhtml_legend=1 00:03:16.783 --rc geninfo_all_blocks=1 00:03:16.783 --rc geninfo_unexecuted_blocks=1 00:03:16.783 00:03:16.783 ' 00:03:16.783 17:06:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.783 --rc genhtml_branch_coverage=1 00:03:16.783 --rc genhtml_function_coverage=1 00:03:16.783 --rc genhtml_legend=1 00:03:16.783 --rc geninfo_all_blocks=1 00:03:16.783 --rc geninfo_unexecuted_blocks=1 00:03:16.783 00:03:16.783 ' 00:03:16.783 17:06:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.783 17:06:13 -- nvmf/common.sh@7 -- # uname -s 00:03:16.783 17:06:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.783 17:06:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.783 17:06:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.783 17:06:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.783 17:06:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.783 17:06:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.783 17:06:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.783 17:06:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.783 17:06:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.783 17:06:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.783 17:06:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:16.783 17:06:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:16.783 17:06:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.783 17:06:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.783 17:06:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:16.783 17:06:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:16.783 17:06:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.783 17:06:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.783 17:06:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.783 17:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.783 17:06:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.783 17:06:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.783 17:06:13 -- paths/export.sh@5 -- # export PATH 00:03:16.783 17:06:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.783 17:06:13 -- nvmf/common.sh@46 -- # : 0 00:03:16.783 17:06:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 
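The "lt 1.15 2" / "cmp_versions 1.15 '<' 2" trace a few entries up is autotest_common.sh working out whether the installed lcov is older than 2.x, so that the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options are only passed to versions that still accept them. A minimal stand-alone sketch of that component-wise comparison, assuming purely numeric version fields (an illustration of the idiom, not the scripts/common.sh source):

    # version_lt A B  ->  succeeds if version A sorts strictly below version B
    version_lt() {
        local -a a b; local v
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    # e.g. only keep the old-style lcov flags when lcov is still 1.x:
    ver=$(lcov --version | awk '{print $NF}')
    version_lt "$ver" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'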
00:03:16.783 17:06:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:16.783 17:06:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:16.783 17:06:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.783 17:06:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.783 17:06:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:16.783 17:06:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:16.783 17:06:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:16.783 17:06:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.783 17:06:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.783 17:06:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.783 17:06:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.783 17:06:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:16.783 17:06:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.783 17:06:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:16.783 17:06:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.783 17:06:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.783 17:06:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.783 17:06:13 -- spdk/autotest.sh@48 -- # udevadm_pid=1126745 00:03:16.783 17:06:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.783 17:06:13 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:16.783 17:06:13 -- spdk/autotest.sh@54 -- # echo 1126747 00:03:16.783 17:06:13 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:16.783 17:06:13 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:16.783 17:06:13 -- spdk/autotest.sh@56 -- # echo 1126748 00:03:16.783 17:06:13 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:16.783 17:06:13 -- spdk/autotest.sh@60 -- # echo 1126749 00:03:16.783 17:06:13 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:16.783 17:06:13 -- spdk/autotest.sh@62 -- # echo 1126750 00:03:16.783 17:06:13 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:16.783 17:06:13 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:16.783 17:06:13 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:16.783 17:06:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:16.783 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:03:16.783 17:06:13 -- spdk/autotest.sh@70 -- # create_test_list 00:03:16.783 17:06:13 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:16.783 17:06:13 -- common/autotest_common.sh@10 -- # set +x 00:03:16.783 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:16.783 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:16.784 17:06:13 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:16.784 17:06:13 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:16.784 17:06:13 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:16.784 17:06:13 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:16.784 17:06:13 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:16.784 17:06:13 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:16.784 17:06:13 -- common/autotest_common.sh@1450 -- # uname 00:03:16.784 17:06:13 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:16.784 17:06:13 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:16.784 17:06:13 -- common/autotest_common.sh@1470 -- # uname 00:03:16.784 17:06:13 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:16.784 17:06:13 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:16.784 17:06:13 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:16.784 lcov: LCOV version 1.15 00:03:16.784 17:06:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:19.320 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:19.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:19.320 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:19.320 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:19.320 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:19.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:41.265 17:06:35 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:41.265 17:06:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.265 17:06:35 -- common/autotest_common.sh@10 -- # set +x 00:03:41.265 17:06:35 -- spdk/autotest.sh@89 -- # rm -f 00:03:41.265 17:06:35 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.646 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:42.646 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:42.905 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:42.905 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:42.905 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:42.905 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:42.905 17:06:39 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:42.905 17:06:39 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:42.905 17:06:39 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:42.905 17:06:39 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:42.905 17:06:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:42.905 17:06:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:42.905 17:06:39 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:42.905 17:06:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.905 17:06:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:42.905 17:06:39 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:42.905 17:06:39 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:03:42.905 17:06:39 -- spdk/autotest.sh@108 -- # grep -v p 00:03:42.905 17:06:39 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:42.905 17:06:39 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:42.905 17:06:39 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:42.905 17:06:39 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:42.905 17:06:39 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.905 No valid GPT data, bailing 00:03:42.905 17:06:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.905 17:06:39 -- scripts/common.sh@393 -- # pt= 00:03:42.905 17:06:39 -- 
scripts/common.sh@394 -- # return 1 00:03:42.905 17:06:39 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.905 1+0 records in 00:03:42.905 1+0 records out 00:03:42.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00567951 s, 185 MB/s 00:03:42.905 17:06:39 -- spdk/autotest.sh@116 -- # sync 00:03:42.905 17:06:39 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.905 17:06:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.905 17:06:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.032 17:06:46 -- spdk/autotest.sh@122 -- # uname -s 00:03:51.032 17:06:46 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:51.032 17:06:46 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.032 17:06:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.032 17:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.032 17:06:46 -- common/autotest_common.sh@10 -- # set +x 00:03:51.032 ************************************ 00:03:51.032 START TEST setup.sh 00:03:51.032 ************************************ 00:03:51.033 17:06:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.033 * Looking for test storage... 00:03:51.033 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:51.033 17:06:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.033 17:06:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.033 17:06:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.033 17:06:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.033 17:06:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.033 17:06:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.033 17:06:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.033 17:06:46 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.033 17:06:46 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.033 17:06:46 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.033 17:06:46 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.033 17:06:46 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.033 17:06:46 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.033 17:06:46 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.033 17:06:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.033 17:06:46 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.033 17:06:46 -- scripts/common.sh@344 -- # : 1 00:03:51.033 17:06:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.033 17:06:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.033 17:06:46 -- scripts/common.sh@364 -- # decimal 1 00:03:51.033 17:06:46 -- scripts/common.sh@352 -- # local d=1 00:03:51.033 17:06:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.033 17:06:46 -- scripts/common.sh@354 -- # echo 1 00:03:51.033 17:06:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.033 17:06:46 -- scripts/common.sh@365 -- # decimal 2 00:03:51.033 17:06:46 -- scripts/common.sh@352 -- # local d=2 00:03:51.033 17:06:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.033 17:06:46 -- scripts/common.sh@354 -- # echo 2 00:03:51.033 17:06:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.033 17:06:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.033 17:06:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.033 17:06:46 -- scripts/common.sh@367 -- # return 0 00:03:51.033 17:06:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.033 17:06:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 00:03:51.033 17:06:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 00:03:51.033 17:06:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 00:03:51.033 17:06:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 00:03:51.033 17:06:46 -- setup/test-setup.sh@10 -- # uname -s 00:03:51.033 17:06:46 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:51.033 17:06:46 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:51.033 17:06:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.033 17:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.033 17:06:46 -- common/autotest_common.sh@10 -- # set +x 00:03:51.033 ************************************ 00:03:51.033 START TEST acl 00:03:51.033 ************************************ 00:03:51.033 17:06:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:51.033 * Looking for test storage... 
00:03:51.033 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:51.033 17:06:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.033 17:06:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.033 17:06:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.033 17:06:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.033 17:06:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.033 17:06:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.033 17:06:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.033 17:06:47 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.033 17:06:47 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.033 17:06:47 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.033 17:06:47 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.033 17:06:47 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.033 17:06:47 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.033 17:06:47 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.033 17:06:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.033 17:06:47 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.033 17:06:47 -- scripts/common.sh@344 -- # : 1 00:03:51.033 17:06:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.033 17:06:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.033 17:06:47 -- scripts/common.sh@364 -- # decimal 1 00:03:51.033 17:06:47 -- scripts/common.sh@352 -- # local d=1 00:03:51.033 17:06:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.033 17:06:47 -- scripts/common.sh@354 -- # echo 1 00:03:51.033 17:06:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.033 17:06:47 -- scripts/common.sh@365 -- # decimal 2 00:03:51.033 17:06:47 -- scripts/common.sh@352 -- # local d=2 00:03:51.033 17:06:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.033 17:06:47 -- scripts/common.sh@354 -- # echo 2 00:03:51.033 17:06:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.033 17:06:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.033 17:06:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.033 17:06:47 -- scripts/common.sh@367 -- # return 0 00:03:51.033 17:06:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.033 17:06:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 00:03:51.033 17:06:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 00:03:51.033 17:06:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 
00:03:51.033 17:06:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.033 --rc genhtml_branch_coverage=1 00:03:51.033 --rc genhtml_function_coverage=1 00:03:51.033 --rc genhtml_legend=1 00:03:51.033 --rc geninfo_all_blocks=1 00:03:51.033 --rc geninfo_unexecuted_blocks=1 00:03:51.033 00:03:51.033 ' 00:03:51.033 17:06:47 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:51.033 17:06:47 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:51.033 17:06:47 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:51.033 17:06:47 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:51.033 17:06:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:51.033 17:06:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:51.033 17:06:47 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:51.033 17:06:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.033 17:06:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:51.033 17:06:47 -- setup/acl.sh@12 -- # devs=() 00:03:51.033 17:06:47 -- setup/acl.sh@12 -- # declare -a devs 00:03:51.033 17:06:47 -- setup/acl.sh@13 -- # drivers=() 00:03:51.033 17:06:47 -- setup/acl.sh@13 -- # declare -A drivers 00:03:51.033 17:06:47 -- setup/acl.sh@51 -- # setup reset 00:03:51.033 17:06:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.033 17:06:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.229 17:06:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:55.229 17:06:51 -- setup/acl.sh@16 -- # local dev driver 00:03:55.229 17:06:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.229 17:06:51 -- setup/acl.sh@15 -- # setup output status 00:03:55.229 17:06:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.229 17:06:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:57.767 Hugepages 00:03:57.767 node hugesize free / total 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # continue 00:03:57.767 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # continue 00:03:57.767 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # continue 00:03:57.767 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.767 00:03:57.767 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # continue 00:03:57.767 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:57.767 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.767 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:57.767 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:57.767 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.767 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:57.767 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
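What the setup/acl.sh trace is doing here, and keeps doing for each controller below, is collecting test devices: every line of "setup.sh status" output is split with read, only rows whose second field looks like a PCI address (*:*:*.*) are considered, ioatdma-bound functions are skipped, and nvme-bound functions not listed in PCI_BLOCKED are added to the devs/drivers sets. A rough sketch of that collection loop, with the setup.sh path and the status-output column layout taken as assumptions from the trace rather than copied from test/setup/acl.sh:

    # Collect nvme-bound PCI functions reported by "setup.sh status" (sketch only).
    declare -a devs
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue              # keep only rows carrying a PCI BDF
        [[ $driver == nvme ]] || continue              # ignore ioatdma and friends
        [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue # honor the blocked list
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(./scripts/setup.sh status)                # path is an assumption for this sketch
    printf 'collected %d nvme device(s): %s\n' "${#devs[@]}" "${devs[*]}"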
00:03:57.767 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:57.767 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:57.767 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:57.767 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # continue 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.027 17:06:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:58.027 17:06:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.027 17:06:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.027 17:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.027 17:06:54 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:58.027 17:06:54 -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.027 17:06:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.027 17:06:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.027 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:03:58.028 ************************************ 00:03:58.028 START TEST denied 00:03:58.028 ************************************ 00:03:58.028 17:06:54 -- common/autotest_common.sh@1114 -- # denied 00:03:58.028 17:06:54 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:58.028 17:06:54 -- setup/acl.sh@38 -- # setup output config 00:03:58.028 17:06:54 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:58.028 17:06:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.028 17:06:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:02.220 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:04:02.220 17:06:58 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:04:02.220 17:06:58 -- setup/acl.sh@28 -- # local dev driver 00:04:02.220 17:06:58 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:02.220 17:06:58 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:02.220 17:06:58 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:02.220 17:06:58 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:02.220 17:06:58 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:02.220 17:06:58 -- setup/acl.sh@41 -- # setup reset 00:04:02.220 17:06:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.220 17:06:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.596 00:04:07.596 real 0m8.663s 00:04:07.596 user 0m2.687s 00:04:07.596 sys 0m5.294s 00:04:07.596 17:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.596 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.596 ************************************ 00:04:07.596 END TEST denied 00:04:07.596 ************************************ 00:04:07.596 17:07:03 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:07.596 17:07:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.596 17:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.596 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.596 ************************************ 00:04:07.596 START TEST allowed 00:04:07.596 ************************************ 00:04:07.596 17:07:03 -- common/autotest_common.sh@1114 -- # allowed 00:04:07.596 17:07:03 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:07.596 17:07:03 -- setup/acl.sh@45 -- # setup output config 00:04:07.596 17:07:03 -- 
setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:07.596 17:07:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.596 17:07:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:12.872 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:12.872 17:07:08 -- setup/acl.sh@47 -- # verify 00:04:12.872 17:07:08 -- setup/acl.sh@28 -- # local dev driver 00:04:12.872 17:07:08 -- setup/acl.sh@48 -- # setup reset 00:04:12.872 17:07:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.872 17:07:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.066 00:04:17.066 real 0m9.675s 00:04:17.066 user 0m2.686s 00:04:17.066 sys 0m5.201s 00:04:17.066 17:07:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.066 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.066 ************************************ 00:04:17.066 END TEST allowed 00:04:17.066 ************************************ 00:04:17.066 00:04:17.066 real 0m26.086s 00:04:17.066 user 0m8.164s 00:04:17.066 sys 0m15.771s 00:04:17.066 17:07:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.066 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.066 ************************************ 00:04:17.066 END TEST acl 00:04:17.066 ************************************ 00:04:17.066 17:07:13 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:17.066 17:07:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.066 17:07:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.066 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.066 ************************************ 00:04:17.066 START TEST hugepages 00:04:17.066 ************************************ 00:04:17.066 17:07:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:17.067 * Looking for test storage... 00:04:17.067 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:17.067 17:07:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:17.067 17:07:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:17.067 17:07:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:17.067 17:07:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:17.067 17:07:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:17.067 17:07:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:17.067 17:07:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:17.067 17:07:13 -- scripts/common.sh@335 -- # IFS=.-: 00:04:17.067 17:07:13 -- scripts/common.sh@335 -- # read -ra ver1 00:04:17.067 17:07:13 -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.067 17:07:13 -- scripts/common.sh@336 -- # read -ra ver2 00:04:17.067 17:07:13 -- scripts/common.sh@337 -- # local 'op=<' 00:04:17.067 17:07:13 -- scripts/common.sh@339 -- # ver1_l=2 00:04:17.067 17:07:13 -- scripts/common.sh@340 -- # ver2_l=1 00:04:17.067 17:07:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:17.067 17:07:13 -- scripts/common.sh@343 -- # case "$op" in 00:04:17.067 17:07:13 -- scripts/common.sh@344 -- # : 1 00:04:17.067 17:07:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:17.067 17:07:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.067 17:07:13 -- scripts/common.sh@364 -- # decimal 1 00:04:17.067 17:07:13 -- scripts/common.sh@352 -- # local d=1 00:04:17.067 17:07:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.067 17:07:13 -- scripts/common.sh@354 -- # echo 1 00:04:17.067 17:07:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:17.067 17:07:13 -- scripts/common.sh@365 -- # decimal 2 00:04:17.067 17:07:13 -- scripts/common.sh@352 -- # local d=2 00:04:17.067 17:07:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.067 17:07:13 -- scripts/common.sh@354 -- # echo 2 00:04:17.067 17:07:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:17.067 17:07:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:17.067 17:07:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:17.067 17:07:13 -- scripts/common.sh@367 -- # return 0 00:04:17.067 17:07:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.067 17:07:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.067 --rc genhtml_branch_coverage=1 00:04:17.067 --rc genhtml_function_coverage=1 00:04:17.067 --rc genhtml_legend=1 00:04:17.067 --rc geninfo_all_blocks=1 00:04:17.067 --rc geninfo_unexecuted_blocks=1 00:04:17.067 00:04:17.067 ' 00:04:17.067 17:07:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.067 --rc genhtml_branch_coverage=1 00:04:17.067 --rc genhtml_function_coverage=1 00:04:17.067 --rc genhtml_legend=1 00:04:17.067 --rc geninfo_all_blocks=1 00:04:17.067 --rc geninfo_unexecuted_blocks=1 00:04:17.067 00:04:17.067 ' 00:04:17.067 17:07:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.067 --rc genhtml_branch_coverage=1 00:04:17.067 --rc genhtml_function_coverage=1 00:04:17.067 --rc genhtml_legend=1 00:04:17.067 --rc geninfo_all_blocks=1 00:04:17.067 --rc geninfo_unexecuted_blocks=1 00:04:17.067 00:04:17.067 ' 00:04:17.067 17:07:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:17.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.067 --rc genhtml_branch_coverage=1 00:04:17.067 --rc genhtml_function_coverage=1 00:04:17.067 --rc genhtml_legend=1 00:04:17.067 --rc geninfo_all_blocks=1 00:04:17.067 --rc geninfo_unexecuted_blocks=1 00:04:17.067 00:04:17.067 ' 00:04:17.067 17:07:13 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:17.067 17:07:13 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:17.067 17:07:13 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:17.067 17:07:13 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:17.067 17:07:13 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:17.067 17:07:13 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:17.067 17:07:13 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:17.067 17:07:13 -- setup/common.sh@18 -- # local node= 00:04:17.067 17:07:13 -- setup/common.sh@19 -- # local var val 00:04:17.067 17:07:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.067 17:07:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.067 17:07:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.067 17:07:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.067 17:07:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.067 
17:07:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 41221652 kB' 'MemAvailable: 44946208 kB' 'Buffers: 4100 kB' 'Cached: 10760216 kB' 'SwapCached: 0 kB' 'Active: 7515352 kB' 'Inactive: 3692704 kB' 'Active(anon): 7126912 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447212 kB' 'Mapped: 181976 kB' 'Shmem: 6683172 kB' 'KReclaimable: 281004 kB' 'Slab: 1038284 kB' 'SReclaimable: 281004 kB' 'SUnreclaim: 757280 kB' 'KernelStack: 21904 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433348 kB' 'Committed_AS: 8302524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217676 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
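The long run of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / continue entries here (continuing below) is setup/common.sh's get_meminfo walking the meminfo snapshot one field at a time: each "Key: value kB" line is split on ': ' and skipped until the requested key, Hugepagesize in this case, matches. A condensed sketch of that lookup, assuming the system-wide /proc/meminfo (the real helper can also take a per-node meminfo file, whose "Node N " prefix it strips first); this is an illustration, not the setup/common.sh source:

    # get_meminfo_field KEY  ->  prints the value recorded for KEY in /proc/meminfo
    get_meminfo_field() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # e.g. var=Hugepagesize val=2048 _=kB
            [[ $var == "$get" ]] || continue
            printf '%s\n' "$val"
            return 0
        done
        return 1
    }
    get_meminfo_field Hugepagesize    # typically prints 2048 on x86_64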
00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.067 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.067 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 
17:07:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # continue 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.068 17:07:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.068 17:07:13 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:17.068 17:07:13 -- setup/common.sh@33 -- # echo 2048 00:04:17.068 17:07:13 -- setup/common.sh@33 -- # return 0 00:04:17.068 17:07:13 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:17.068 17:07:13 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:17.068 17:07:13 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:17.068 17:07:13 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:17.068 17:07:13 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:17.068 17:07:13 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:17.068 17:07:13 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:17.068 17:07:13 -- setup/hugepages.sh@207 -- # get_nodes 00:04:17.068 17:07:13 -- setup/hugepages.sh@27 -- # local node 00:04:17.068 17:07:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.068 17:07:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:17.068 17:07:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.068 17:07:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:17.068 17:07:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.068 17:07:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.068 17:07:13 -- setup/hugepages.sh@208 -- # clear_hp 00:04:17.068 17:07:13 -- setup/hugepages.sh@37 -- # local node hp 00:04:17.068 17:07:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:17.068 17:07:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.068 17:07:13 -- setup/hugepages.sh@41 -- # echo 0 
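[editorial sketch] The long trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field with IFS=': ' until it reaches the requested key (Hugepagesize here), at which point it echoes the value (2048) and returns. A minimal stand-alone sketch of that lookup follows; it is not the SPDK implementation, it reads the system-wide file only and skips the per-node "Node N" prefix stripping that the traced helper also performs.

# Sketch, assuming only the /proc/meminfo format visible in the trace.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do          # same IFS / read pattern as the trace
        if [[ $var == "$get" ]]; then
            echo "$val"                           # e.g. "2048" for Hugepagesize (value in kB)
            return 0
        fi
    done < /proc/meminfo
    return 1                                      # key not present
}
# get_meminfo_sketch Hugepagesize   -> 2048 on the machine in this log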
00:04:17.068 17:07:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.068 17:07:13 -- setup/hugepages.sh@41 -- # echo 0 00:04:17.068 17:07:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:17.068 17:07:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.069 17:07:13 -- setup/hugepages.sh@41 -- # echo 0 00:04:17.069 17:07:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:17.069 17:07:13 -- setup/hugepages.sh@41 -- # echo 0 00:04:17.069 17:07:13 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:17.069 17:07:13 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:17.069 17:07:13 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:17.069 17:07:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.069 17:07:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.069 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.069 ************************************ 00:04:17.069 START TEST default_setup 00:04:17.069 ************************************ 00:04:17.069 17:07:13 -- common/autotest_common.sh@1114 -- # default_setup 00:04:17.069 17:07:13 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:17.069 17:07:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.069 17:07:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:17.069 17:07:13 -- setup/hugepages.sh@51 -- # shift 00:04:17.069 17:07:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:17.069 17:07:13 -- setup/hugepages.sh@52 -- # local node_ids 00:04:17.069 17:07:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.069 17:07:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.069 17:07:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:17.069 17:07:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:17.069 17:07:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.069 17:07:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.069 17:07:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.069 17:07:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.069 17:07:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.069 17:07:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:17.069 17:07:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:17.069 17:07:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:17.069 17:07:13 -- setup/hugepages.sh@73 -- # return 0 00:04:17.069 17:07:13 -- setup/hugepages.sh@137 -- # setup output 00:04:17.069 17:07:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.069 17:07:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:20.360 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 
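[editorial sketch] Just above, default_setup requests hugepages with get_test_nr_hugepages 2097152 0: the requested size (2097152 kB) divided by the default hugepage size (2048 kB) gives nr_hugepages=1024, and because an explicit node list ("0") was passed, the whole count is pinned to node 0. The sketch below mirrors that arithmetic under the assumption that sizes are in kB as in the trace; the helper name is illustrative, and the even split in the no-node-list branch is an assumption, since only the explicit-node path is exercised in this log.

# Sketch: turn a requested size into a hugepage count and assign it to nodes.
plan_hugepages() {
    local size_kb=$1; shift          # requested size in kB (2097152 in the trace)
    local default_kb=2048            # Hugepagesize from /proc/meminfo, in kB
    local nr_pages=$(( size_kb / default_kb ))
    local -a nodes=("$@")            # explicit node ids, e.g. "0"
    local node

    if (( ${#nodes[@]} > 0 )); then
        # Explicit node list: the full count goes to each listed node,
        # as the single-node trace above does.
        for node in "${nodes[@]}"; do
            echo "node$node: $nr_pages pages"
        done
    else
        # Assumption: with no node list, split evenly across online nodes.
        local -a all=(/sys/devices/system/node/node[0-9]*)
        echo "each of ${#all[@]} nodes: $(( nr_pages / ${#all[@]} )) pages"
    fi
}

plan_hugepages 2097152 0    # -> node0: 1024 pages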
00:04:20.360 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:20.360 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:20.619 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:20.619 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.527 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.527 17:07:19 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:22.527 17:07:19 -- setup/hugepages.sh@89 -- # local node 00:04:22.527 17:07:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.527 17:07:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.527 17:07:19 -- setup/hugepages.sh@92 -- # local surp 00:04:22.527 17:07:19 -- setup/hugepages.sh@93 -- # local resv 00:04:22.527 17:07:19 -- setup/hugepages.sh@94 -- # local anon 00:04:22.527 17:07:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.527 17:07:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.527 17:07:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.527 17:07:19 -- setup/common.sh@18 -- # local node= 00:04:22.527 17:07:19 -- setup/common.sh@19 -- # local var val 00:04:22.527 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.527 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.527 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.527 17:07:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.527 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.527 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.527 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.527 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.527 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43387664 kB' 'MemAvailable: 47112012 kB' 'Buffers: 4100 kB' 'Cached: 10760356 kB' 'SwapCached: 0 kB' 'Active: 7517348 kB' 'Inactive: 3692704 kB' 'Active(anon): 7128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448916 kB' 'Mapped: 182244 kB' 'Shmem: 6683312 kB' 'KReclaimable: 280588 kB' 'Slab: 1036852 kB' 'SReclaimable: 280588 kB' 'SUnreclaim: 756264 kB' 'KernelStack: 22048 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8305360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:22.527 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.527 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.527 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.527 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.527 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.527 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- 
setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:22.528 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.528 17:07:19 -- setup/common.sh@33 -- # echo 0 00:04:22.528 17:07:19 -- setup/common.sh@33 -- # return 0 00:04:22.528 17:07:19 -- setup/hugepages.sh@97 -- # anon=0 00:04:22.528 17:07:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.528 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.528 17:07:19 -- setup/common.sh@18 -- # local node= 00:04:22.528 17:07:19 -- setup/common.sh@19 -- # local var val 00:04:22.528 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.528 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.528 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.528 17:07:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.528 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.528 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.528 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43397644 kB' 'MemAvailable: 47121984 kB' 'Buffers: 4100 kB' 'Cached: 10760360 kB' 'SwapCached: 0 kB' 'Active: 7517284 kB' 'Inactive: 3692704 kB' 'Active(anon): 7128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448900 kB' 'Mapped: 182160 kB' 'Shmem: 6683316 kB' 'KReclaimable: 280572 kB' 'Slab: 1036844 kB' 'SReclaimable: 280572 kB' 'SUnreclaim: 756272 kB' 'KernelStack: 21984 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8305372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # 
continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.529 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.529 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 
17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.530 17:07:19 -- setup/common.sh@33 -- # echo 0 00:04:22.530 17:07:19 -- setup/common.sh@33 -- # return 0 00:04:22.530 17:07:19 -- setup/hugepages.sh@99 -- # surp=0 00:04:22.530 17:07:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.530 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.530 17:07:19 -- setup/common.sh@18 -- # local node= 00:04:22.530 17:07:19 -- setup/common.sh@19 -- # local var val 00:04:22.530 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.530 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.530 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.530 17:07:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.530 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.530 17:07:19 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43397368 kB' 'MemAvailable: 47121708 kB' 'Buffers: 4100 kB' 'Cached: 10760372 kB' 'SwapCached: 0 kB' 'Active: 7517292 kB' 'Inactive: 3692704 kB' 'Active(anon): 7128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448860 kB' 'Mapped: 182160 kB' 'Shmem: 6683328 kB' 'KReclaimable: 280572 kB' 'Slab: 1036844 kB' 'SReclaimable: 280572 kB' 'SUnreclaim: 756272 kB' 'KernelStack: 22016 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8305388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.530 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.530 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 
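[editorial sketch] These repeated passes over /proc/meminfo (AnonHugePages, then HugePages_Surp just above, with HugePages_Rsvd still being read here) are verify_nr_hugepages collecting the hugepage counters; once resv is in, the script reports nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 and checks them against the requested count, as the trace a little further down shows. A minimal sketch of that consistency check, assuming the same /proc/meminfo counter names; the reporting format and awk-based lookup are illustrative, not the SPDK code.

# Sketch: verify the kernel really provides the requested hugepage pool.
verify_hugepages() {
    local expected=$1
    local total rsvd surp
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)

    echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp"

    # Mirrors the (( 1024 == nr_hugepages + surp + resv )) check seen in the
    # trace: the pool must match the request once surplus and reserved pages
    # are accounted for.
    (( expected == total + surp + rsvd ))
}

verify_hugepages 1024 && echo "hugepage pool OK"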
00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.792 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.792 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.793 17:07:19 -- setup/common.sh@33 -- # echo 0 00:04:22.793 17:07:19 -- setup/common.sh@33 -- # return 0 00:04:22.793 17:07:19 -- setup/hugepages.sh@100 -- # resv=0 00:04:22.793 17:07:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.793 nr_hugepages=1024 00:04:22.793 17:07:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.793 resv_hugepages=0 00:04:22.793 17:07:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.793 surplus_hugepages=0 00:04:22.793 17:07:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.793 anon_hugepages=0 00:04:22.793 17:07:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.793 17:07:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.793 17:07:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.793 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.793 17:07:19 -- setup/common.sh@18 -- # local node= 00:04:22.793 17:07:19 -- setup/common.sh@19 -- # local var val 00:04:22.793 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.793 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.793 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.793 17:07:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.793 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.793 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43396240 kB' 'MemAvailable: 47120580 kB' 'Buffers: 4100 kB' 'Cached: 10760384 kB' 'SwapCached: 0 kB' 'Active: 7517224 kB' 'Inactive: 3692704 kB' 'Active(anon): 7128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 448812 kB' 'Mapped: 182160 kB' 'Shmem: 6683340 kB' 'KReclaimable: 280572 kB' 'Slab: 1036748 kB' 'SReclaimable: 280572 kB' 'SUnreclaim: 756176 kB' 'KernelStack: 22048 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8305160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 
00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.793 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.793 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.794 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.794 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.794 17:07:19 -- setup/common.sh@33 -- # echo 1024 00:04:22.794 17:07:19 -- setup/common.sh@33 -- # return 0 00:04:22.794 17:07:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.794 17:07:19 -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.794 17:07:19 -- setup/hugepages.sh@27 -- # local node 00:04:22.794 17:07:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.794 17:07:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.794 17:07:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.794 17:07:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:22.794 17:07:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.794 17:07:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.794 17:07:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.794 17:07:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.794 17:07:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.794 17:07:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.794 17:07:19 -- setup/common.sh@18 -- # local node=0 00:04:22.794 17:07:19 -- setup/common.sh@19 -- # local var val 00:04:22.794 17:07:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.794 17:07:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.795 17:07:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.795 17:07:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.795 17:07:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.795 17:07:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26856992 kB' 'MemUsed: 5777444 kB' 'SwapCached: 0 kB' 'Active: 2065576 kB' 'Inactive: 107564 kB' 'Active(anon): 1860812 kB' 'Inactive(anon): 0 kB' 'Active(file): 204764 kB' 'Inactive(file): 107564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1855728 kB' 'Mapped: 147460 kB' 'AnonPages: 320620 kB' 'Shmem: 1543400 kB' 'KernelStack: 11192 kB' 'PageTables: 4824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113008 kB' 'Slab: 437300 kB' 'SReclaimable: 113008 kB' 'SUnreclaim: 324292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # continue 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.795 17:07:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.795 17:07:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.795 17:07:19 -- setup/common.sh@33 -- # echo 0 00:04:22.795 17:07:19 -- setup/common.sh@33 -- # return 0 00:04:22.795 17:07:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.795 17:07:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.796 17:07:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.796 17:07:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.796 17:07:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.796 node0=1024 expecting 1024 00:04:22.796 17:07:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.796 00:04:22.796 real 0m5.899s 00:04:22.796 user 0m1.459s 00:04:22.796 sys 0m2.558s 00:04:22.796 17:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.796 17:07:19 -- 
common/autotest_common.sh@10 -- # set +x 00:04:22.796 ************************************ 00:04:22.796 END TEST default_setup 00:04:22.796 ************************************ 00:04:22.796 17:07:19 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:22.796 17:07:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.796 17:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.796 17:07:19 -- common/autotest_common.sh@10 -- # set +x 00:04:22.796 ************************************ 00:04:22.796 START TEST per_node_1G_alloc 00:04:22.796 ************************************ 00:04:22.796 17:07:19 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:22.796 17:07:19 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:22.796 17:07:19 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:22.796 17:07:19 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.796 17:07:19 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:22.796 17:07:19 -- setup/hugepages.sh@51 -- # shift 00:04:22.796 17:07:19 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:22.796 17:07:19 -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.796 17:07:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.796 17:07:19 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.796 17:07:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:22.796 17:07:19 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:22.796 17:07:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.796 17:07:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.796 17:07:19 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.796 17:07:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.796 17:07:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.796 17:07:19 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:22.796 17:07:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.796 17:07:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.796 17:07:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.796 17:07:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.796 17:07:19 -- setup/hugepages.sh@73 -- # return 0 00:04:22.796 17:07:19 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:22.796 17:07:19 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:22.796 17:07:19 -- setup/hugepages.sh@146 -- # setup output 00:04:22.796 17:07:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.796 17:07:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:26.090 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.090 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.090 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.090 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.090 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:04:26.353 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.353 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.353 17:07:22 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:26.353 17:07:22 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:26.353 17:07:22 -- setup/hugepages.sh@89 -- # local node 00:04:26.353 17:07:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.353 17:07:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.353 17:07:22 -- setup/hugepages.sh@92 -- # local surp 00:04:26.353 17:07:22 -- setup/hugepages.sh@93 -- # local resv 00:04:26.353 17:07:22 -- setup/hugepages.sh@94 -- # local anon 00:04:26.353 17:07:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.353 17:07:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.353 17:07:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.353 17:07:22 -- setup/common.sh@18 -- # local node= 00:04:26.353 17:07:22 -- setup/common.sh@19 -- # local var val 00:04:26.353 17:07:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.353 17:07:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.353 17:07:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.353 17:07:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.353 17:07:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.353 17:07:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.353 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.353 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43386184 kB' 'MemAvailable: 47110500 kB' 'Buffers: 4100 kB' 'Cached: 10760480 kB' 'SwapCached: 0 kB' 'Active: 7515816 kB' 'Inactive: 3692704 kB' 'Active(anon): 7127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447120 kB' 'Mapped: 181120 kB' 'Shmem: 6683436 kB' 'KReclaimable: 280524 kB' 'Slab: 1036748 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756224 kB' 'KernelStack: 21904 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8293536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 
17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.354 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.354 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.355 17:07:22 -- 
setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.355 17:07:22 -- setup/common.sh@33 -- # echo 0 00:04:26.355 17:07:22 -- setup/common.sh@33 -- # return 0 00:04:26.355 17:07:22 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.355 17:07:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.355 17:07:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.355 17:07:22 -- setup/common.sh@18 -- # local node= 00:04:26.355 17:07:22 -- setup/common.sh@19 -- # local var val 00:04:26.355 17:07:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.355 17:07:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.355 17:07:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.355 17:07:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.355 17:07:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.355 17:07:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43391004 kB' 'MemAvailable: 47115320 kB' 'Buffers: 4100 kB' 'Cached: 10760480 kB' 'SwapCached: 0 kB' 'Active: 7516056 kB' 'Inactive: 3692704 kB' 'Active(anon): 7127616 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447416 kB' 'Mapped: 181128 kB' 'Shmem: 6683436 kB' 'KReclaimable: 280524 kB' 'Slab: 1036792 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756268 kB' 'KernelStack: 21872 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8306584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217868 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 
17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 
17:07:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.355 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.355 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.355 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.356 17:07:22 -- setup/common.sh@33 -- # echo 0 00:04:26.356 17:07:22 -- setup/common.sh@33 -- # return 0 00:04:26.356 17:07:22 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.356 17:07:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.356 17:07:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.356 17:07:22 -- setup/common.sh@18 -- # local node= 00:04:26.356 17:07:22 -- setup/common.sh@19 -- # local var val 00:04:26.356 17:07:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.356 17:07:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.356 17:07:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.356 17:07:22 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.356 17:07:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.356 17:07:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43392344 kB' 'MemAvailable: 47116660 kB' 'Buffers: 4100 kB' 'Cached: 10760480 kB' 'SwapCached: 0 kB' 'Active: 7515816 kB' 'Inactive: 3692704 kB' 'Active(anon): 7127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447160 kB' 'Mapped: 181112 kB' 'Shmem: 6683436 kB' 'KReclaimable: 280524 kB' 'Slab: 1036788 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756264 kB' 'KernelStack: 21904 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8306756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217868 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.356 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.356 17:07:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # 
continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:22 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.357 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.357 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.358 17:07:23 -- setup/common.sh@33 -- # echo 0 00:04:26.358 17:07:23 -- setup/common.sh@33 -- # return 0 00:04:26.358 17:07:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.358 17:07:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.358 nr_hugepages=1024 00:04:26.358 17:07:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.358 resv_hugepages=0 00:04:26.358 17:07:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.358 surplus_hugepages=0 00:04:26.358 17:07:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.358 anon_hugepages=0 00:04:26.358 17:07:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.358 17:07:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.358 17:07:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.358 17:07:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.358 17:07:23 -- setup/common.sh@18 -- # local node= 00:04:26.358 17:07:23 -- setup/common.sh@19 -- # local var val 00:04:26.358 17:07:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.358 17:07:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.358 17:07:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.358 17:07:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.358 17:07:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.358 17:07:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43392120 kB' 'MemAvailable: 47116436 kB' 'Buffers: 4100 kB' 'Cached: 10760480 kB' 'SwapCached: 0 kB' 'Active: 7515944 kB' 'Inactive: 3692704 kB' 'Active(anon): 7127504 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447296 kB' 'Mapped: 181112 kB' 'Shmem: 6683436 kB' 'KReclaimable: 280524 kB' 'Slab: 1036820 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756296 kB' 'KernelStack: 21904 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8293208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.358 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.358 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.359 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.359 17:07:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.620 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 17:07:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.620 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 17:07:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.621 17:07:23 -- setup/common.sh@33 -- # echo 1024 00:04:26.621 17:07:23 -- setup/common.sh@33 -- # return 0 00:04:26.621 17:07:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.621 17:07:23 -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.621 17:07:23 -- setup/hugepages.sh@27 -- # local node 00:04:26.621 17:07:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.621 17:07:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.621 17:07:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.621 17:07:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.621 17:07:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.621 17:07:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.621 17:07:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.621 17:07:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.621 17:07:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.621 17:07:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.621 17:07:23 -- setup/common.sh@18 -- # local node=0 00:04:26.621 17:07:23 -- setup/common.sh@19 -- # local var val 00:04:26.621 17:07:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.621 17:07:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.621 17:07:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.621 17:07:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.621 17:07:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.621 17:07:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27939112 kB' 'MemUsed: 4695324 kB' 'SwapCached: 0 kB' 'Active: 2065248 kB' 'Inactive: 107564 kB' 'Active(anon): 1860484 kB' 'Inactive(anon): 0 kB' 'Active(file): 204764 kB' 'Inactive(file): 107564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1855832 kB' 'Mapped: 146608 kB' 'AnonPages: 320180 kB' 'Shmem: 1543504 kB' 'KernelStack: 11096 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113008 kB' 'Slab: 437364 kB' 'SReclaimable: 113008 kB' 'SUnreclaim: 324356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.621 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.621 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- 
setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.622 17:07:23 -- setup/common.sh@33 -- # echo 0 00:04:26.622 17:07:23 -- setup/common.sh@33 -- # return 0 00:04:26.622 17:07:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.622 17:07:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.622 17:07:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.622 17:07:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:26.622 17:07:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.622 17:07:23 -- setup/common.sh@18 -- # local node=1 00:04:26.622 17:07:23 -- setup/common.sh@19 -- # local var val 00:04:26.622 17:07:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.622 17:07:23 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.622 17:07:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:26.622 17:07:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:26.622 17:07:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.622 17:07:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.622 17:07:23 -- setup/common.sh@31 -- # IFS=': '
00:04:26.622 17:07:23 -- setup/common.sh@31 -- # read -r var val _
00:04:26.622 17:07:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 15452492 kB' 'MemUsed: 12196868 kB' 'SwapCached: 0 kB' 'Active: 5449792 kB' 'Inactive: 3585140 kB' 'Active(anon): 5266116 kB' 'Inactive(anon): 0 kB' 'Active(file): 183676 kB' 'Inactive(file): 3585140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8908808 kB' 'Mapped: 34504 kB' 'AnonPages: 126160 kB' 'Shmem: 5139992 kB' 'KernelStack: 10760 kB' 'PageTables: 2728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167516 kB' 'Slab: 599440 kB' 'SReclaimable: 167516 kB' 'SUnreclaim: 431924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:26.622 17:07:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.622 17:07:23 -- setup/common.sh@32 -- # continue
... (the setup/common.sh@31-32 IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for every remaining field of the node1 dump above) ...
00:04:26.623 17:07:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.623 17:07:23 -- setup/common.sh@33 -- # echo 0
00:04:26.623 17:07:23 -- setup/common.sh@33 -- # return 0
00:04:26.623 17:07:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.623 17:07:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.623 17:07:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.623 17:07:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:26.623 node0=512 expecting 512
00:04:26.623 17:07:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.623 17:07:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.623 17:07:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.623 17:07:23 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:26.623 node1=512 expecting 512
00:04:26.623 17:07:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:26.623 real 0m3.766s
00:04:26.623 user 0m1.459s
00:04:26.623 sys 0m2.378s
00:04:26.623 17:07:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:26.623 17:07:23 -- common/autotest_common.sh@10 -- # set +x
00:04:26.623 ************************************
00:04:26.623 END TEST per_node_1G_alloc
00:04:26.623 ************************************
00:04:26.623 17:07:23 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:26.623 17:07:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:26.623 17:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:26.623 17:07:23 -- common/autotest_common.sh@10 -- # set +x
00:04:26.623 ************************************
00:04:26.623 START TEST even_2G_alloc
00:04:26.623 ************************************
00:04:26.623 17:07:23 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:26.623 17:07:23 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:26.623 17:07:23 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:26.623 17:07:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:26.623 17:07:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.623 17:07:23 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.623 17:07:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.623 17:07:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
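The pass above is one complete run of the get_meminfo helper in setup/common.sh: it snapshots the chosen meminfo file with mapfile, strips the "Node <n>" prefix that the per-node files carry, then walks the fields with IFS=': ' until the requested key (here HugePages_Surp on node1) matches and its value is echoed. The nr_hugepages=1024 derived just above is likewise consistent with the 2 GiB request (2097152 kB) divided by the 2048 kB hugepage size reported in the dumps. A minimal standalone sketch of that lookup pattern, under an assumed helper name (get_meminfo_field), not the verbatim SPDK code:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (assumed simplification of
    # setup/common.sh's get_meminfo, not the exact script).
    shopt -s extglob

    get_meminfo_field() {                      # hypothetical helper name
        local get=$1 mem_f=${2:-/proc/meminfo}
        local -a mem
        local var val _
        mapfile -t mem < "$mem_f"
        # Per-node meminfo files prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. the lookup traced above: surplus hugepages on NUMA node 1
    get_meminfo_field HugePages_Surp /sys/devices/system/node/node1/meminfo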
00:04:26.623 17:07:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.623 17:07:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.623 17:07:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.623 17:07:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:26.623 17:07:23 -- setup/hugepages.sh@83 -- # : 512
00:04:26.623 17:07:23 -- setup/hugepages.sh@84 -- # : 1
00:04:26.623 17:07:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:26.623 17:07:23 -- setup/hugepages.sh@83 -- # : 0
00:04:26.623 17:07:23 -- setup/hugepages.sh@84 -- # : 0
00:04:26.623 17:07:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.623 17:07:23 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:26.623 17:07:23 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:26.623 17:07:23 -- setup/hugepages.sh@153 -- # setup output
00:04:26.623 17:07:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.623 17:07:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:29.914 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:29.914 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:29.914 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:29.914 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:29.914 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:29.914 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:30.176 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:30.176 17:07:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:30.176 17:07:26 -- setup/hugepages.sh@89 -- # local node
00:04:30.176 17:07:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:30.176 17:07:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:30.176 17:07:26 -- setup/hugepages.sh@92 -- # local surp
00:04:30.176 17:07:26 -- setup/hugepages.sh@93 -- # local resv
00:04:30.176 17:07:26 -- setup/hugepages.sh@94 -- # local anon
00:04:30.176 17:07:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.176 17:07:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:30.176 17:07:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.176 17:07:26 -- setup/common.sh@18 -- # local node=
00:04:30.176 17:07:26 -- setup/common.sh@19 -- # local var val
00:04:30.176 17:07:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.176 17:07:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
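With no user-supplied node list and HUGE_EVEN_ALLOC=yes, get_test_nr_hugepages_per_node spreads the 1024 requested pages evenly over the two NUMA nodes, which is the pair of nodes_test[...]=512 assignments traced above; scripts/setup.sh then applies the request, and verify_nr_hugepages re-reads the counters. A rough reconstruction of the split with hypothetical variable names (the exact remainder handling in setup/hugepages.sh may differ):

    #!/usr/bin/env bash
    # Assumed reconstruction of the even per-node split traced above.
    nr_hugepages=1024
    no_nodes=2
    declare -a nodes_test

    for (( node = no_nodes - 1; node >= 0; node-- )); do
        # 1024 pages over 2 nodes -> 512 per node (division is exact in this run)
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done
    declare -p nodes_test    # prints: declare -a nodes_test=([0]="512" [1]="512")

    # A per-node request like this is ultimately expressed through the kernel's
    # sysfs knob, e.g. (illustration only, not what scripts/setup.sh literally runs):
    #   echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages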
00:04:30.176 17:07:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.176 17:07:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.176 17:07:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.176 17:07:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.176 17:07:26 -- setup/common.sh@31 -- # IFS=': '
00:04:30.176 17:07:26 -- setup/common.sh@31 -- # read -r var val _
00:04:30.176 17:07:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43429056 kB' 'MemAvailable: 47153372 kB' 'Buffers: 4100 kB' 'Cached: 10760616 kB' 'SwapCached: 0 kB' 'Active: 7517552 kB' 'Inactive: 3692704 kB' 'Active(anon): 7129112 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448344 kB' 'Mapped: 181136 kB' 'Shmem: 6683572 kB' 'KReclaimable: 280524 kB' 'Slab: 1037244 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756720 kB' 'KernelStack: 21872 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8294636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217852 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB'
00:04:30.176 17:07:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.176 17:07:26 -- setup/common.sh@32 -- # continue
... (the setup/common.sh@31-32 IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every remaining field of the dump above) ...
00:04:30.177 17:07:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.177 17:07:26 -- setup/common.sh@33 -- # echo 0
00:04:30.177 17:07:26 -- setup/common.sh@33 -- # return 0
00:04:30.177 17:07:26 -- setup/hugepages.sh@97 -- # anon=0
00:04:30.177 17:07:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:30.177 17:07:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.177 17:07:26 -- setup/common.sh@18 -- # local node=
00:04:30.177 17:07:26 -- setup/common.sh@19 -- # local var val
00:04:30.177 17:07:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.177 17:07:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.177 17:07:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.177 17:07:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.177 17:07:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.177 17:07:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.177 17:07:26 -- setup/common.sh@31 -- # IFS=': '
00:04:30.177 17:07:26 -- setup/common.sh@31 -- # read -r var val _
00:04:30.177 17:07:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43430820 kB' 'MemAvailable: 47155136 kB' 'Buffers: 4100 kB' 'Cached: 10760620 kB' 'SwapCached: 0 kB' 'Active: 7516784 kB' 'Inactive: 3692704 kB' 'Active(anon): 7128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448068 kB' 'Mapped: 181132 kB' 'Shmem: 6683576 kB' 'KReclaimable: 280524 kB' 'Slab: 1037292 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756768 kB' 'KernelStack: 21888 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8294648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB'
00:04:30.177 17:07:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.177 17:07:26 -- setup/common.sh@32 -- # continue
... (the same cycle, now matching against HugePages_Surp, repeats for every remaining field of the dump above) ...
00:04:30.179 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.179 17:07:26 -- setup/common.sh@33 -- # echo 0
00:04:30.179 17:07:26 -- setup/common.sh@33 -- # return 0
00:04:30.179 17:07:26 -- setup/hugepages.sh@99 -- # surp=0
00:04:30.179 17:07:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:30.179 17:07:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.179 17:07:26 -- setup/common.sh@18 -- # local node=
00:04:30.179 17:07:26 -- setup/common.sh@19 -- # local var val
00:04:30.179 17:07:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.179 17:07:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.179 17:07:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.179 17:07:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.179 17:07:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.179 17:07:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.179 17:07:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43431336 kB' 'MemAvailable: 47155652 kB' 'Buffers: 4100 kB' 'Cached: 10760628 kB' 'SwapCached: 0 kB' 'Active: 7516936 kB' 'Inactive: 3692704 kB' 'Active(anon): 7128496 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448248 kB' 'Mapped: 181132 kB' 'Shmem: 6683584 kB' 'KReclaimable: 280524 kB' 'Slab: 1037300 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756776 kB' 'KernelStack: 21888 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8294664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217820 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB'
00:04:30.179 17:07:26 -- setup/common.sh@31 -- # IFS=': '
00:04:30.179 17:07:26 -- setup/common.sh@31 -- # read -r var val _
00:04:30.179 17:07:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.179 17:07:26 -- setup/common.sh@32 -- # continue
... (the same cycle, now matching against HugePages_Rsvd, repeats for every remaining field of the dump above) ...
00:04:30.180 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.180 17:07:26 -- setup/common.sh@33 -- # echo 0
00:04:30.180 17:07:26 -- setup/common.sh@33 -- # return 0
00:04:30.180 17:07:26 -- setup/hugepages.sh@100 -- # resv=0
00:04:30.180 17:07:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:30.180 17:07:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:30.180 17:07:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:30.180 17:07:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:30.180 17:07:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.180 17:07:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:30.180 17:07:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.180 17:07:26 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.180 17:07:26 -- setup/common.sh@18 -- # local node=
00:04:30.180 17:07:26 -- setup/common.sh@19 -- # local var val
00:04:30.180 17:07:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.180 17:07:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.180 17:07:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.180 17:07:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.180 17:07:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.180 17:07:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.180 17:07:26 -- setup/common.sh@31 -- # IFS=': '
00:04:30.180 17:07:26 -- setup/common.sh@31 -- # read -r var val _
00:04:30.180 17:07:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43431336 kB' 'MemAvailable: 47155652 kB' 'Buffers: 4100 kB' 'Cached: 10760640 kB' 'SwapCached: 0 kB' 'Active: 7517000 kB' 'Inactive: 3692704 kB' 'Active(anon): 7128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448256 kB' 'Mapped: 181132 kB' 'Shmem: 6683596 kB' 'KReclaimable: 280524 kB' 'Slab: 1037300 kB' 'SReclaimable: 280524 kB' 'SUnreclaim: 756776 kB' 'KernelStack: 21872 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8294676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217820 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB'
00:04:30.180 17:07:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.180 17:07:26 -- setup/common.sh@32 -- # continue
00:04:30.180 17:07:26 -- setup/common.sh@31 -- # IFS=': '
00:04:30.180 17:07:26 -- setup/common.sh@31 -- # read -r var val _
00:04:30.180 17:07:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.180 17:07:26 -- setup/common.sh@32 -- # continue
00:04:30.180 17:07:26 -- setup/common.sh@31 -- # IFS=': '
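The echoes and the arithmetic check traced above are the core of verify_nr_hugepages: with anonymous, surplus and reserved hugepages all at 0 in this run, the script asserts that the requested nr_hugepages plus surplus plus reserved accounts for the expected total (1024) before re-reading HugePages_Total and, later, the per-node counts. A compact sketch of that accounting, with a plain awk lookup standing in for get_meminfo and hypothetical names:

    #!/usr/bin/env bash
    # Assumed condensation of the hugepage accounting traced above.
    meminfo_field() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

    nr_hugepages=1024                           # what the test requested
    anon=$(meminfo_field AnonHugePages)         # 0 kB in this run
    surp=$(meminfo_field HugePages_Surp)        # 0
    resv=$(meminfo_field HugePages_Rsvd)        # 0
    total=$(meminfo_field HugePages_Total)      # expected to be 1024

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # Every requested page must be visible system-wide once surplus and reserved
    # pages are added back; only then are the per-node counts compared.
    (( total == nr_hugepages + surp + resv )) || { echo 'hugepage accounting mismatch' >&2; exit 1; }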
00:04:30.180 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.180 17:07:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.180 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.180 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.180 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.180 17:07:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.180 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.180 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.180 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.180 17:07:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.180 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.180 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.180 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.442 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.442 17:07:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.443 17:07:26 -- setup/common.sh@33 -- # echo 1024 00:04:30.443 17:07:26 -- setup/common.sh@33 -- # return 0 00:04:30.443 17:07:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.443 17:07:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.443 17:07:26 -- setup/hugepages.sh@27 -- # local node 00:04:30.443 17:07:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.443 17:07:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.443 17:07:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.443 17:07:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.443 17:07:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.443 17:07:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.443 17:07:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.443 17:07:26 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:30.443 17:07:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.443 17:07:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.443 17:07:26 -- setup/common.sh@18 -- # local node=0 00:04:30.443 17:07:26 -- setup/common.sh@19 -- # local var val 00:04:30.443 17:07:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.443 17:07:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.443 17:07:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.443 17:07:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.443 17:07:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.443 17:07:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27954848 kB' 'MemUsed: 4679588 kB' 'SwapCached: 0 kB' 'Active: 2066228 kB' 'Inactive: 107564 kB' 'Active(anon): 1861464 kB' 'Inactive(anon): 0 kB' 'Active(file): 204764 kB' 'Inactive(file): 107564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1855928 kB' 'Mapped: 146628 kB' 'AnonPages: 321132 kB' 'Shmem: 1543600 kB' 'KernelStack: 11144 kB' 'PageTables: 4788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113008 kB' 'Slab: 437736 kB' 'SReclaimable: 113008 kB' 'SUnreclaim: 324728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.443 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.443 17:07:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
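The trace from hugepages.sh@115 onward is the per-node half of verify_nr_hugepages: after the global HugePages_Total check (1024 pages) passes, the script folds the reserved count into each node's expected total and adds that node's HugePages_Surp, read from /sys/devices/system/node/nodeN/meminfo, before comparing against the expected per-node split. A rough sketch of that loop, assuming the get_meminfo sketch above and using the names visible in the trace (the real hugepages.sh interleaves this with sorted_t/sorted_s bookkeeping):

    # nodes_test[] holds the expected pages per NUMA node (512 and 512 in this run);
    # resv is the reserved-page count obtained earlier (0 here).
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # hugepages.sh@116 in the trace
        surp=$(get_meminfo HugePages_Surp "$node")   # node-local meminfo lookup
        (( nodes_test[node] += surp ))               # 0 surplus pages on both nodes here
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # e.g. "node0=512 expecting 512"
    done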
00:04:30.443 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 
00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@33 -- # echo 0 00:04:30.444 17:07:26 -- setup/common.sh@33 -- # return 0 00:04:30.444 17:07:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.444 17:07:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.444 17:07:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.444 17:07:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:30.444 17:07:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.444 17:07:26 -- setup/common.sh@18 -- # local node=1 00:04:30.444 17:07:26 -- setup/common.sh@19 -- # local var val 00:04:30.444 17:07:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.444 17:07:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.444 17:07:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:30.444 17:07:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:30.444 17:07:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.444 17:07:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 15477072 kB' 'MemUsed: 12172288 kB' 'SwapCached: 0 kB' 'Active: 5451080 kB' 'Inactive: 3585140 kB' 'Active(anon): 5267404 kB' 'Inactive(anon): 0 kB' 'Active(file): 183676 kB' 'Inactive(file): 3585140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8908832 kB' 'Mapped: 34504 kB' 'AnonPages: 127500 kB' 'Shmem: 5140016 kB' 'KernelStack: 10744 kB' 'PageTables: 2644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167516 kB' 'Slab: 599560 kB' 'SReclaimable: 167516 kB' 'SUnreclaim: 432044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.444 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.444 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # 
continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # continue 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.445 17:07:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.445 17:07:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.445 17:07:26 -- setup/common.sh@33 -- # echo 0 00:04:30.445 17:07:26 -- setup/common.sh@33 -- # return 0 00:04:30.445 17:07:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.445 17:07:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.445 17:07:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.445 17:07:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.445 17:07:26 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:30.445 node0=512 expecting 512 00:04:30.445 17:07:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.445 17:07:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.445 17:07:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.445 17:07:26 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:30.445 node1=512 expecting 512 00:04:30.445 17:07:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:30.445 00:04:30.445 real 0m3.766s 00:04:30.445 user 0m1.394s 00:04:30.445 sys 0m2.441s 00:04:30.445 17:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.445 17:07:26 -- common/autotest_common.sh@10 -- # set +x 00:04:30.445 ************************************ 00:04:30.445 END TEST even_2G_alloc 00:04:30.445 ************************************ 00:04:30.445 17:07:26 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:30.445 17:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.445 17:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.445 17:07:26 -- common/autotest_common.sh@10 -- # set +x 00:04:30.445 ************************************ 00:04:30.445 START TEST odd_alloc 00:04:30.445 ************************************ 00:04:30.445 17:07:26 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:30.445 17:07:26 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:30.445 17:07:26 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:30.445 17:07:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:30.445 17:07:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.445 17:07:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:30.445 17:07:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:30.445 17:07:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.445 17:07:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.445 17:07:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:30.445 17:07:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:30.445 17:07:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.445 17:07:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.445 17:07:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.445 17:07:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:30.445 17:07:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.446 17:07:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:30.446 17:07:26 -- setup/hugepages.sh@83 -- # : 513 00:04:30.446 17:07:26 -- setup/hugepages.sh@84 -- # : 1 00:04:30.446 17:07:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.446 17:07:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:30.446 17:07:26 -- setup/hugepages.sh@83 -- # : 0 00:04:30.446 17:07:26 -- setup/hugepages.sh@84 -- # : 0 00:04:30.446 17:07:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.446 17:07:26 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:30.446 17:07:26 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:30.446 17:07:26 -- setup/hugepages.sh@160 -- # setup output 00:04:30.446 17:07:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.446 17:07:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:33.737 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:33.737 0000:00:04.6 (8086 2021): Already using the vfio-pci 
driver 00:04:33.737 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:33.737 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:33.737 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:34.000 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:34.000 17:07:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:34.000 17:07:30 -- setup/hugepages.sh@89 -- # local node 00:04:34.000 17:07:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.000 17:07:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.000 17:07:30 -- setup/hugepages.sh@92 -- # local surp 00:04:34.000 17:07:30 -- setup/hugepages.sh@93 -- # local resv 00:04:34.000 17:07:30 -- setup/hugepages.sh@94 -- # local anon 00:04:34.000 17:07:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.000 17:07:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.000 17:07:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.000 17:07:30 -- setup/common.sh@18 -- # local node= 00:04:34.000 17:07:30 -- setup/common.sh@19 -- # local var val 00:04:34.000 17:07:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.000 17:07:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.000 17:07:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.000 17:07:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.000 17:07:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.000 17:07:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43449724 kB' 'MemAvailable: 47173984 kB' 'Buffers: 4100 kB' 'Cached: 10760756 kB' 'SwapCached: 0 kB' 'Active: 7520732 kB' 'Inactive: 3692704 kB' 'Active(anon): 7132292 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450768 kB' 'Mapped: 181236 kB' 'Shmem: 6683712 kB' 'KReclaimable: 280412 kB' 'Slab: 1036980 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756568 kB' 'KernelStack: 21792 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 8295776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.000 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.000 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 
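For the odd_alloc case the trace requests 2098176 kB of hugepage memory (HUGEMEM=2049 MiB). With the 2048 kB page size reported in the meminfo dumps that works out to an odd page count of 1025, which the per-node loop then splits as 512 pages on node0 and 513 on node1, consistent with HugePages_Total: 1025 and Hugetlb: 2099200 kB in the snapshots above. A small worked check of that arithmetic (the round-up is inferred from the traced values, not read from the script source):

    size_kb=2098176                               # from get_test_nr_hugepages 2098176 (HUGEMEM=2049 MiB)
    page_kb=2048                                  # Hugepagesize in the meminfo dumps
    nr=$(( (size_kb + page_kb - 1) / page_kb ))   # 1025 pages, assuming round-up
    echo "$nr pages -> $(( nr * page_kb )) kB"    # 1025 pages -> 2099200 kB, matching Hugetlb above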
00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.001 17:07:30 -- setup/common.sh@33 -- # echo 0 00:04:34.001 17:07:30 -- setup/common.sh@33 -- # return 0 00:04:34.001 17:07:30 -- setup/hugepages.sh@97 -- # anon=0 00:04:34.001 17:07:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.001 17:07:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.001 17:07:30 -- setup/common.sh@18 -- # local node= 00:04:34.001 17:07:30 -- setup/common.sh@19 -- # local var val 00:04:34.001 17:07:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.001 17:07:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.001 17:07:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.001 17:07:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.001 17:07:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.001 17:07:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43450868 kB' 'MemAvailable: 47175128 kB' 'Buffers: 4100 kB' 'Cached: 10760760 kB' 'SwapCached: 0 kB' 'Active: 7519544 kB' 'Inactive: 3692704 kB' 'Active(anon): 7131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450212 kB' 'Mapped: 181220 kB' 'Shmem: 6683716 kB' 'KReclaimable: 280412 kB' 'Slab: 1036932 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756520 kB' 'KernelStack: 21888 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 8295788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.001 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.001 17:07:30 -- setup/common.sh@32 -- # [[ 
00:04:34.002 17:07:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:34.002 17:07:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:34.002 17:07:30 -- setup/common.sh@18 -- # local node=
00:04:34.002 17:07:30 -- setup/common.sh@19 -- # local var val
00:04:34.002 17:07:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.002 17:07:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.002 17:07:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.002 17:07:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.002 17:07:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.002 17:07:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.002 17:07:30 -- setup/common.sh@31 -- # IFS=': '
00:04:34.002 17:07:30 -- setup/common.sh@31 -- # read -r var val _
00:04:34.003 17:07:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43452444 kB' 'MemAvailable: 47176704 kB' 'Buffers: 4100 kB' 'Cached: 10760760 kB' 'SwapCached: 0 kB' 'Active: 7519464 kB' 'Inactive: 3692704 kB' 'Active(anon): 7131024 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450132 kB' 'Mapped: 181220 kB' 'Shmem: 6683716 kB' 'KReclaimable: 280412 kB' 'Slab: 1036932 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756520 kB' 'KernelStack: 21888 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 8295804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB'
00:04:34.004 17:07:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.004 17:07:30 -- setup/common.sh@33 -- # echo 0
00:04:34.004 17:07:30 -- setup/common.sh@33 -- # return 0
00:04:34.004 17:07:30 -- setup/hugepages.sh@100 -- # resv=0
00:04:34.004 17:07:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:34.004 nr_hugepages=1025
00:04:34.004 17:07:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:34.004 resv_hugepages=0
00:04:34.004 17:07:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:34.004 surplus_hugepages=0
00:04:34.004 17:07:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:34.004 anon_hugepages=0
00:04:34.004 17:07:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:34.004 17:07:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
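At this point hugepages.sh has resolved anon=0, surp=0 and resv=0 and prints the accounting it expects (nr_hugepages=1025 with no reserved or surplus pages); the two (( ... )) checks then confirm that the requested count is consistent with what the kernel reports. A rough, hedged equivalent of that consistency check, not the hugepages.sh code itself; expected=1025 is simply the figure from this run:

# Compare the kernel's HugePages_Total against the requested count plus any
# surplus and reserved pages (all values read from /proc/meminfo).
expected=1025
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if (( total == expected + surp + resv )); then
	echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
	echo "hugepage mismatch: total=$total, expected $((expected + surp + resv))" >&2
fi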
00:04:34.004 17:07:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.004 17:07:30 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:34.004 17:07:30 -- setup/common.sh@18 -- # local node=
00:04:34.004 17:07:30 -- setup/common.sh@19 -- # local var val
00:04:34.004 17:07:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.004 17:07:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.004 17:07:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.004 17:07:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.004 17:07:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.004 17:07:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.004 17:07:30 -- setup/common.sh@31 -- # IFS=': '
00:04:34.004 17:07:30 -- setup/common.sh@31 -- # read -r var val _
00:04:34.004 17:07:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43452536 kB' 'MemAvailable: 47176796 kB' 'Buffers: 4100 kB' 'Cached: 10760764 kB' 'SwapCached: 0 kB' 'Active: 7519428 kB' 'Inactive: 3692704 kB' 'Active(anon): 7130988 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450092 kB' 'Mapped: 181220 kB' 'Shmem: 6683720 kB' 'KReclaimable: 280412 kB' 'Slab: 1036932 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756520 kB' 'KernelStack: 21856 kB' 'PageTables: 7356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480900 kB' 'Committed_AS: 8295964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB'
00:04:34.266 17:07:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:34.266 17:07:30 -- setup/common.sh@33 -- # echo 1025
00:04:34.266 17:07:30 -- setup/common.sh@33 -- # return 0
00:04:34.266 17:07:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:34.266 17:07:30 -- setup/hugepages.sh@112 -- # get_nodes
00:04:34.266 17:07:30 -- setup/hugepages.sh@27 -- # local node
00:04:34.266 17:07:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.266 17:07:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:34.266 17:07:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.266 17:07:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:34.266 17:07:30 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:34.266 17:07:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.266 17:07:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.266 17:07:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:34.266 17:07:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.266 17:07:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.266 17:07:30 -- setup/common.sh@18 -- # local node=0
00:04:34.266 17:07:30 -- setup/common.sh@19 -- # local var val
00:04:34.266 17:07:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.266 17:07:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.266 17:07:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:34.266 17:07:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:34.266 17:07:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.266 17:07:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.266 17:07:30 -- setup/common.sh@31 -- # IFS=': '
00:04:34.266 17:07:30 -- setup/common.sh@31 -- # read -r var val _
00:04:34.266 17:07:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27965784 kB' 'MemUsed: 4668652 kB' 'SwapCached: 0 kB' 'Active: 2066928 kB' 'Inactive: 107564 kB' 'Active(anon): 1862164 kB' 'Inactive(anon): 0 kB' 'Active(file): 204764 kB' 'Inactive(file): 107564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1856000 kB' 'Mapped: 146636 kB' 'AnonPages: 321644 kB' 'Shmem: 1543672 kB' 'KernelStack: 11096 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113008 kB' 'Slab: 437456 kB' 'SReclaimable: 113008 kB' 'SUnreclaim: 324448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:34.267 17:07:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.267 17:07:30 -- setup/common.sh@33 -- # echo 0
00:04:34.267 17:07:30 -- setup/common.sh@33 -- # return 0
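get_nodes has recorded the per-node request (512 pages on node0, 513 on node1, no_nodes=2), and the loop that follows reads each node's own meminfo to fold any per-node surplus into the expected counts; node0 has just returned HugePages_Surp 0, and node1 is read next. A small sketch of how such a per-node readout can be checked, assuming the sysfs layout seen in the trace; the expected array holds this run's 512/513 split and is illustrative only, not taken from hugepages.sh:

# Compare each NUMA node's HugePages_Total against the per-node request,
# allowing for surplus pages reported by the node's meminfo.
declare -A expected=([0]=512 [1]=513)

for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	total=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
	surp=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
	if (( total == ${expected[$node]:-0} + surp )); then
		echo "node$node: HugePages_Total=$total matches request ${expected[$node]:-0} (+$surp surplus)"
	else
		echo "node$node: HugePages_Total=$total, expected ${expected[$node]:-0}" >&2
	fi
done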
00:04:34.267 17:07:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.267 17:07:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.267 17:07:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.267 17:07:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.267 17:07:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.267 17:07:30 -- setup/common.sh@18 -- # local node=1 00:04:34.267 17:07:30 -- setup/common.sh@19 -- # local var val 00:04:34.267 17:07:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.267 17:07:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.267 17:07:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.267 17:07:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.267 17:07:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.267 17:07:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.267 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.267 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.267 17:07:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 15486616 kB' 'MemUsed: 12162744 kB' 'SwapCached: 0 kB' 'Active: 5452176 kB' 'Inactive: 3585140 kB' 'Active(anon): 5268500 kB' 'Inactive(anon): 0 kB' 'Active(file): 183676 kB' 'Inactive(file): 3585140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8908912 kB' 'Mapped: 34584 kB' 'AnonPages: 128104 kB' 'Shmem: 5140096 kB' 'KernelStack: 10776 kB' 'PageTables: 2836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167404 kB' 'Slab: 599480 kB' 'SReclaimable: 167404 kB' 'SUnreclaim: 432076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:34.267 17:07:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.267 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.267 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.267 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.267 17:07:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.267 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 
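As a quick sanity check on the node-1 snapshot printed above, the reported MemUsed is simply MemTotal minus MemFree (figures copied from the log):

    echo $((27649360 - 15486616))   # 12162744 kB, matching the MemUsed line above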
00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ KernelStack 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 
17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # continue 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.268 17:07:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.268 17:07:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.268 17:07:30 -- setup/common.sh@33 -- # echo 0 00:04:34.268 17:07:30 -- setup/common.sh@33 -- # return 0 00:04:34.268 17:07:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.268 17:07:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.268 17:07:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.268 17:07:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.268 17:07:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:34.268 node0=512 expecting 513 00:04:34.268 17:07:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.268 17:07:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.268 17:07:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.268 17:07:30 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:34.268 node1=513 expecting 512 00:04:34.268 17:07:30 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:34.268 00:04:34.268 real 0m3.776s 00:04:34.268 user 0m1.444s 00:04:34.268 sys 0m2.403s 00:04:34.268 17:07:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.268 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:04:34.268 ************************************ 00:04:34.268 END TEST odd_alloc 00:04:34.268 ************************************ 00:04:34.268 17:07:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:34.268 17:07:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.268 17:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.269 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:04:34.269 ************************************ 00:04:34.269 START TEST custom_alloc 00:04:34.269 ************************************ 00:04:34.269 17:07:30 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:34.269 17:07:30 -- setup/hugepages.sh@167 -- # local IFS=, 
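Before the custom_alloc trace continues, one plausible reading of the odd_alloc verification just above: each node's count from one series is recorded as an index of sorted_t and the matching count from the other series as an index of sorted_s, and the final [[ 512 513 == \5\1\2\ \5\1\3 ]] test then compares the two index lists, which sparse indexed arrays expand in ascending order. That would explain why 'node0=512 expecting 513' / 'node1=513 expecting 512' still passes: the check cares about the multiset of counts, not which node ended up with which. A sketch of that pattern, reconstructed from the hugepages.sh trace (the array names match the trace; how nodes_test and nodes_sys are filled is not shown here, so their values are taken from the echoed lines):

    # Sparse indexed arrays whose indices act as a sorted set of counts.
    sorted_t=() sorted_s=()
    nodes_test=([0]=512 [1]=513)   # per-node values on one side of the check (log values)
    nodes_sys=([0]=513 [1]=512)    # per-node values on the other side (log values)

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    # Both index lists expand to "512 513" here, so the layout check passes.
    [[ "${!sorted_t[*]}" == "${!sorted_s[*]}" ]] && echo 'hugepage layout OK'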
00:04:34.269 17:07:30 -- setup/hugepages.sh@169 -- # local node 00:04:34.269 17:07:30 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:34.269 17:07:30 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:34.269 17:07:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:34.269 17:07:30 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:34.269 17:07:30 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:34.269 17:07:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:34.269 17:07:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.269 17:07:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.269 17:07:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.269 17:07:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.269 17:07:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.269 17:07:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.269 17:07:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.269 17:07:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:34.269 17:07:30 -- setup/hugepages.sh@83 -- # : 256 00:04:34.269 17:07:30 -- setup/hugepages.sh@84 -- # : 1 00:04:34.269 17:07:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:34.269 17:07:30 -- setup/hugepages.sh@83 -- # : 0 00:04:34.269 17:07:30 -- setup/hugepages.sh@84 -- # : 0 00:04:34.269 17:07:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:34.269 17:07:30 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:34.269 17:07:30 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.269 17:07:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.269 17:07:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.269 17:07:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.269 17:07:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.269 17:07:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.269 17:07:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.269 17:07:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.269 17:07:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.269 17:07:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:34.269 17:07:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:34.269 17:07:30 -- setup/hugepages.sh@78 -- # return 0 00:04:34.269 17:07:30 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:34.269 17:07:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:34.269 17:07:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:34.269 17:07:30 -- 
setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:34.269 17:07:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:34.269 17:07:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:34.269 17:07:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.269 17:07:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.269 17:07:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.269 17:07:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.269 17:07:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.269 17:07:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.269 17:07:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:34.269 17:07:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:34.269 17:07:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:34.269 17:07:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:34.269 17:07:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:34.269 17:07:30 -- setup/hugepages.sh@78 -- # return 0 00:04:34.269 17:07:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:34.269 17:07:30 -- setup/hugepages.sh@187 -- # setup output 00:04:34.269 17:07:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.269 17:07:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:37.560 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:37.560 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:37.822 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:37.822 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:37.822 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:37.822 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:37.822 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:37.822 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.822 17:07:34 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:37.822 17:07:34 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:37.822 17:07:34 -- setup/hugepages.sh@89 -- # local node 00:04:37.822 17:07:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.822 17:07:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.822 17:07:34 -- setup/hugepages.sh@92 -- # local surp 00:04:37.822 17:07:34 -- setup/hugepages.sh@93 -- # local resv 00:04:37.822 17:07:34 -- setup/hugepages.sh@94 -- # local anon 00:04:37.822 17:07:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] 
never != *\[\n\e\v\e\r\]* ]] 00:04:37.822 17:07:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.822 17:07:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.822 17:07:34 -- setup/common.sh@18 -- # local node= 00:04:37.822 17:07:34 -- setup/common.sh@19 -- # local var val 00:04:37.822 17:07:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.822 17:07:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.822 17:07:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.822 17:07:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.822 17:07:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.822 17:07:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.822 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.822 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42394264 kB' 'MemAvailable: 46118524 kB' 'Buffers: 4100 kB' 'Cached: 10760888 kB' 'SwapCached: 0 kB' 'Active: 7517812 kB' 'Inactive: 3692704 kB' 'Active(anon): 7129372 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448820 kB' 'Mapped: 181160 kB' 'Shmem: 6683844 kB' 'KReclaimable: 280412 kB' 'Slab: 1036524 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756112 kB' 'KernelStack: 21904 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 8296432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 
17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
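A side note on the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test that opened this verify pass (hugepages.sh@96 above): it appears to gate the AnonHugePages lookup on transparent hugepages not being disabled, by checking the THP mode string for '[never]'. A hedged reconstruction, assuming the string comes from the usual sysfs knob:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so anonymous hugepage usage is worth reading.
        grep AnonHugePages /proc/meminfo
    fi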
00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.823 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.823 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.824 17:07:34 -- setup/common.sh@33 -- # echo 0 00:04:37.824 17:07:34 -- setup/common.sh@33 -- # return 0 00:04:37.824 17:07:34 -- setup/hugepages.sh@97 -- # anon=0 00:04:37.824 17:07:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.824 17:07:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.824 17:07:34 -- setup/common.sh@18 -- # local node= 00:04:37.824 17:07:34 -- setup/common.sh@19 -- # local var val 00:04:37.824 17:07:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.824 17:07:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.824 17:07:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.824 17:07:34 -- setup/common.sh@25 -- # [[ -n '' ]] 
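For orientation while the HugePages_Surp scan runs: the counts this custom_alloc pass works with follow directly from the two requested sizes and the 2048 kB default hugepage size reported in the meminfo snapshots ('Hugepagesize: 2048 kB'). All figures below are copied from the log:

    echo $((1048576 / 2048))   # 512  hugepages -> nodes_hp[0]
    echo $((2097152 / 2048))   # 1024 hugepages -> nodes_hp[1]
    echo $((512 + 1024))       # 1536 -> nr_hugepages and the HugePages_Total shown in the snapshots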
00:04:37.824 17:07:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.824 17:07:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42395692 kB' 'MemAvailable: 46119952 kB' 'Buffers: 4100 kB' 'Cached: 10760892 kB' 'SwapCached: 0 kB' 'Active: 7518596 kB' 'Inactive: 3692704 kB' 'Active(anon): 7130156 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449580 kB' 'Mapped: 181144 kB' 'Shmem: 6683848 kB' 'KReclaimable: 280412 kB' 'Slab: 1036624 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756212 kB' 'KernelStack: 21888 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 8296444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.824 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.824 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.825 17:07:34 -- setup/common.sh@33 -- # echo 0 00:04:37.825 17:07:34 -- setup/common.sh@33 -- # return 0 00:04:37.825 17:07:34 -- setup/hugepages.sh@99 -- # surp=0 00:04:37.825 17:07:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.825 17:07:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.825 17:07:34 -- setup/common.sh@18 -- # local node= 00:04:37.825 17:07:34 -- setup/common.sh@19 -- # local var val 00:04:37.825 17:07:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.825 17:07:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.825 17:07:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.825 17:07:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.825 17:07:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.825 17:07:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42396600 kB' 'MemAvailable: 46120860 kB' 'Buffers: 4100 kB' 'Cached: 10760904 kB' 'SwapCached: 0 kB' 'Active: 7518584 kB' 'Inactive: 3692704 kB' 'Active(anon): 7130144 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449580 kB' 'Mapped: 181144 kB' 'Shmem: 6683860 kB' 'KReclaimable: 280412 kB' 'Slab: 1036624 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756212 kB' 'KernelStack: 21888 kB' 
'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 8296460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
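When reproducing this check by hand, the three counters the script scans for one at a time (AnonHugePages, HugePages_Surp, and now HugePages_Rsvd) can be pulled in a single pass; this is only a debugging convenience, not part of the test itself:

    grep -E 'AnonHugePages|HugePages_(Surp|Rsvd)' /proc/meminfo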
00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.825 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.825 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.826 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.826 17:07:34 -- setup/common.sh@33 -- # echo 0 00:04:37.826 17:07:34 -- setup/common.sh@33 -- # return 0 00:04:37.826 17:07:34 -- setup/hugepages.sh@100 -- # resv=0 00:04:37.826 17:07:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:37.826 nr_hugepages=1536 00:04:37.826 17:07:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.826 resv_hugepages=0 00:04:37.826 17:07:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.826 surplus_hugepages=0 00:04:37.826 17:07:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.826 anon_hugepages=0 00:04:37.826 17:07:34 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:37.826 17:07:34 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:37.826 17:07:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.826 17:07:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.826 17:07:34 -- setup/common.sh@18 -- # local node= 00:04:37.826 17:07:34 -- setup/common.sh@19 -- # local var val 00:04:37.826 17:07:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.826 17:07:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.826 17:07:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.826 17:07:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.826 17:07:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.826 17:07:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.826 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 42396208 kB' 'MemAvailable: 46120468 kB' 'Buffers: 4100 kB' 'Cached: 10760916 kB' 'SwapCached: 0 kB' 'Active: 7518616 kB' 'Inactive: 3692704 kB' 'Active(anon): 7130176 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449576 kB' 'Mapped: 181144 kB' 'Shmem: 6683872 kB' 'KReclaimable: 280412 kB' 'Slab: 1036624 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 756212 kB' 'KernelStack: 21888 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957636 kB' 'Committed_AS: 8296472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 
'DirectMap1G: 51380224 kB' 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- 
setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # continue 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.827 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.827 17:07:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 
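Worth spelling out the accounting check from setup/hugepages.sh traced a little earlier: with nr_hugepages=1536, resv_hugepages=0 and surplus_hugepages=0 echoed back from the meminfo scans, the test (( 1536 == nr_hugepages + surp + resv )) reduces to 1536 == 1536 + 0 + 0, so the configured pool matches what the kernel reports; the scan continuing below then re-reads HugePages_Total to confirm the same 1536 before the per-node split is verified.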
00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.087 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.087 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.087 17:07:34 -- setup/common.sh@33 -- # echo 1536 00:04:38.087 17:07:34 -- setup/common.sh@33 -- # return 0 00:04:38.087 17:07:34 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:38.087 17:07:34 -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.087 17:07:34 -- setup/hugepages.sh@27 -- # local node 00:04:38.087 17:07:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.087 17:07:34 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:38.087 17:07:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.087 17:07:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.087 17:07:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.087 17:07:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.087 17:07:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.087 17:07:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.087 17:07:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.087 17:07:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.087 17:07:34 -- setup/common.sh@18 -- # local node=0 00:04:38.087 17:07:34 -- setup/common.sh@19 -- # local var val 00:04:38.087 17:07:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.087 17:07:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.087 17:07:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.087 17:07:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.087 17:07:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.088 17:07:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 27965812 kB' 'MemUsed: 4668624 kB' 'SwapCached: 0 kB' 'Active: 2066132 kB' 'Inactive: 107564 kB' 'Active(anon): 1861368 kB' 'Inactive(anon): 0 kB' 'Active(file): 204764 kB' 'Inactive(file): 107564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1856000 kB' 'Mapped: 146640 kB' 'AnonPages: 320824 kB' 'Shmem: 1543672 kB' 'KernelStack: 11080 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113008 kB' 'Slab: 437020 kB' 'SReclaimable: 113008 kB' 'SUnreclaim: 324012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # 
continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 
17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.088 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.088 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.088 17:07:34 -- setup/common.sh@33 -- # echo 0 00:04:38.088 17:07:34 -- setup/common.sh@33 -- # return 0 00:04:38.088 17:07:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.088 17:07:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.088 17:07:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.088 17:07:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:38.088 17:07:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.089 17:07:34 -- setup/common.sh@18 -- # local node=1 00:04:38.089 17:07:34 -- setup/common.sh@19 -- # local var val 00:04:38.089 17:07:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.089 17:07:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.089 17:07:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:38.089 17:07:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:38.089 17:07:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.089 17:07:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.089 17:07:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649360 kB' 'MemFree: 14429236 kB' 'MemUsed: 13220124 kB' 'SwapCached: 0 kB' 'Active: 5452528 kB' 'Inactive: 3585140 kB' 'Active(anon): 5268852 kB' 'Inactive(anon): 0 kB' 'Active(file): 183676 kB' 'Inactive(file): 3585140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8909044 kB' 'Mapped: 34504 kB' 'AnonPages: 128760 kB' 'Shmem: 5140228 kB' 'KernelStack: 10808 kB' 'PageTables: 2896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 167404 kB' 'Slab: 599604 kB' 'SReclaimable: 167404 kB' 'SUnreclaim: 432200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # continue 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.089 17:07:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.089 17:07:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.089 
17:07:34 -- setup/common.sh@33 -- # echo 0 00:04:38.089 17:07:34 -- setup/common.sh@33 -- # return 0 00:04:38.090 17:07:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.090 17:07:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.090 17:07:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.090 17:07:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.090 17:07:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:38.090 node0=512 expecting 512 00:04:38.090 17:07:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.090 17:07:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.090 17:07:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.090 17:07:34 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:38.090 node1=1024 expecting 1024 00:04:38.090 17:07:34 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:38.090 00:04:38.090 real 0m3.766s 00:04:38.090 user 0m1.393s 00:04:38.090 sys 0m2.447s 00:04:38.090 17:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.090 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:04:38.090 ************************************ 00:04:38.090 END TEST custom_alloc 00:04:38.090 ************************************ 00:04:38.090 17:07:34 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:38.090 17:07:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.090 17:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.090 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:04:38.090 ************************************ 00:04:38.090 START TEST no_shrink_alloc 00:04:38.090 ************************************ 00:04:38.090 17:07:34 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:38.090 17:07:34 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:38.090 17:07:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:38.090 17:07:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:38.090 17:07:34 -- setup/hugepages.sh@51 -- # shift 00:04:38.090 17:07:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:38.090 17:07:34 -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.090 17:07:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.090 17:07:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:38.090 17:07:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:38.090 17:07:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:38.090 17:07:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.090 17:07:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.090 17:07:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.090 17:07:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.090 17:07:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.090 17:07:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:38.090 17:07:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.090 17:07:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:38.090 17:07:34 -- setup/hugepages.sh@73 -- # return 0 00:04:38.090 17:07:34 -- setup/hugepages.sh@198 -- # setup output 00:04:38.090 17:07:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.090 17:07:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:41.454 0000:00:04.7 
(8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.454 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.718 17:07:38 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:41.718 17:07:38 -- setup/hugepages.sh@89 -- # local node 00:04:41.718 17:07:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.718 17:07:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.718 17:07:38 -- setup/hugepages.sh@92 -- # local surp 00:04:41.718 17:07:38 -- setup/hugepages.sh@93 -- # local resv 00:04:41.718 17:07:38 -- setup/hugepages.sh@94 -- # local anon 00:04:41.718 17:07:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.718 17:07:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.718 17:07:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.718 17:07:38 -- setup/common.sh@18 -- # local node= 00:04:41.718 17:07:38 -- setup/common.sh@19 -- # local var val 00:04:41.718 17:07:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.718 17:07:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.718 17:07:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.718 17:07:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.718 17:07:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.718 17:07:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43396512 kB' 'MemAvailable: 47120772 kB' 'Buffers: 4100 kB' 'Cached: 10761028 kB' 'SwapCached: 0 kB' 'Active: 7521064 kB' 'Inactive: 3692704 kB' 'Active(anon): 7132624 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451940 kB' 'Mapped: 181576 kB' 'Shmem: 6683984 kB' 'KReclaimable: 280412 kB' 'Slab: 1036040 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 755628 kB' 'KernelStack: 21936 kB' 'PageTables: 7324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8301916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217932 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.718 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.718 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 
17:07:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.719 17:07:38 -- setup/common.sh@33 -- # echo 0 00:04:41.719 17:07:38 -- setup/common.sh@33 -- # return 0 00:04:41.719 17:07:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:41.719 17:07:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.719 17:07:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.719 17:07:38 -- setup/common.sh@18 -- # local node= 00:04:41.719 17:07:38 -- setup/common.sh@19 -- # local var val 00:04:41.719 17:07:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.719 17:07:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.719 17:07:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.719 17:07:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.719 17:07:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.719 17:07:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43398092 kB' 'MemAvailable: 47122352 kB' 'Buffers: 4100 kB' 'Cached: 10761032 kB' 'SwapCached: 0 kB' 'Active: 7519964 kB' 'Inactive: 3692704 kB' 'Active(anon): 7131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450880 kB' 'Mapped: 181224 kB' 'Shmem: 6683988 kB' 'KReclaimable: 280412 kB' 'Slab: 1036032 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 755620 kB' 'KernelStack: 21904 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8301928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217884 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 
17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.719 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.719 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 
-- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.720 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.720 17:07:38 -- setup/common.sh@33 -- # echo 0 00:04:41.720 17:07:38 -- setup/common.sh@33 -- # return 0 00:04:41.720 17:07:38 -- setup/hugepages.sh@99 -- # surp=0 00:04:41.720 17:07:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.720 17:07:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.720 17:07:38 -- setup/common.sh@18 -- # local node= 00:04:41.720 17:07:38 -- setup/common.sh@19 -- # local var val 00:04:41.720 17:07:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.720 17:07:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.720 17:07:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.720 17:07:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.720 17:07:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.720 17:07:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.720 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43399512 kB' 'MemAvailable: 47123772 kB' 'Buffers: 4100 kB' 'Cached: 10761044 kB' 'SwapCached: 0 kB' 'Active: 7520444 kB' 'Inactive: 3692704 kB' 'Active(anon): 7132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451296 kB' 'Mapped: 181148 kB' 'Shmem: 6684000 kB' 'KReclaimable: 280412 kB' 'Slab: 1035972 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 755560 kB' 'KernelStack: 21984 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8301780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 
00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.721 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.721 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.722 17:07:38 -- setup/common.sh@33 -- # echo 0 00:04:41.722 17:07:38 -- setup/common.sh@33 -- # return 0 00:04:41.722 17:07:38 -- setup/hugepages.sh@100 -- # resv=0 00:04:41.722 17:07:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.722 nr_hugepages=1024 00:04:41.722 17:07:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.722 resv_hugepages=0 00:04:41.722 17:07:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.722 surplus_hugepages=0 00:04:41.722 17:07:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.722 anon_hugepages=0 00:04:41.722 17:07:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.722 17:07:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.722 17:07:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.722 17:07:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.722 17:07:38 -- setup/common.sh@18 -- # local node= 00:04:41.722 17:07:38 -- setup/common.sh@19 -- # local var val 00:04:41.722 17:07:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.722 17:07:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.722 17:07:38 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.722 17:07:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.722 17:07:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.722 17:07:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43403948 kB' 'MemAvailable: 47128208 kB' 'Buffers: 4100 kB' 'Cached: 10761056 kB' 'SwapCached: 0 kB' 'Active: 7520240 kB' 'Inactive: 3692704 kB' 'Active(anon): 7131800 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451072 kB' 'Mapped: 181148 kB' 'Shmem: 6684012 kB' 'KReclaimable: 280412 kB' 'Slab: 1035940 kB' 'SReclaimable: 280412 kB' 'SUnreclaim: 755528 kB' 'KernelStack: 21888 kB' 'PageTables: 7424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8300440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- 
setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.722 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.722 17:07:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 
00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.723 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.723 17:07:38 -- setup/common.sh@33 -- # echo 1024 00:04:41.723 17:07:38 -- setup/common.sh@33 -- # return 0 00:04:41.723 17:07:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.723 17:07:38 -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.723 17:07:38 -- setup/hugepages.sh@27 -- # local node 00:04:41.723 17:07:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.723 17:07:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.723 17:07:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.723 17:07:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:41.723 17:07:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.723 17:07:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.723 17:07:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.723 17:07:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.723 17:07:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.723 17:07:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.723 17:07:38 -- setup/common.sh@18 -- # local node=0 00:04:41.723 17:07:38 -- setup/common.sh@19 -- # local var val 00:04:41.723 17:07:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.723 17:07:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.723 17:07:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.723 17:07:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.723 17:07:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.723 17:07:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.723 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26861968 kB' 'MemUsed: 5772468 kB' 'SwapCached: 0 kB' 'Active: 2066980 kB' 'Inactive: 107564 kB' 'Active(anon): 1862216 kB' 'Inactive(anon): 0 kB' 'Active(file): 204764 kB' 'Inactive(file): 107564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1856052 kB' 'Mapped: 146644 kB' 'AnonPages: 321676 kB' 
'Shmem: 1543724 kB' 'KernelStack: 11096 kB' 'PageTables: 4680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113008 kB' 'Slab: 436920 kB' 'SReclaimable: 113008 kB' 'SUnreclaim: 323912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 
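The long run of "[[ <key> == HugePages_Surp ]] / continue" records above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo one key at a time until it reaches HugePages_Surp. Below is a minimal sketch of that parsing pattern reconstructed from the trace (the /proc-vs-per-node mem_f selection, the "Node N " prefix strip, and the IFS=': ' read loop all appear in the log); the function body is illustrative, not the repository script.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above; reconstructed from the
# log, not copied from the SPDK repository.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ mem
    local mem_f=/proc/meminfo
    # Per-node lookups switch to the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Surp 0   # in the run above this scan ends by echoing 0

The echoed value (0 here) is what hugepages.sh@117 then adds into nodes_test[0] before the per-node comparison is printed.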
00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # continue 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.724 17:07:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.724 17:07:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.724 17:07:38 -- setup/common.sh@33 -- # echo 0 00:04:41.724 17:07:38 -- setup/common.sh@33 -- # return 0 00:04:41.724 17:07:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.724 17:07:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.724 17:07:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.724 17:07:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.724 17:07:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:41.724 node0=1024 expecting 1024 00:04:41.724 17:07:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:41.724 17:07:38 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:41.724 17:07:38 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:41.724 17:07:38 -- setup/hugepages.sh@202 -- # setup output 00:04:41.724 17:07:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.724 17:07:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:45.925 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:45.925 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:45.925 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:45.925 17:07:41 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:45.925 17:07:41 -- setup/hugepages.sh@89 -- # local node 00:04:45.925 17:07:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.925 17:07:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.925 17:07:41 -- setup/hugepages.sh@92 -- # local surp 00:04:45.925 17:07:41 -- setup/hugepages.sh@93 -- # local resv 00:04:45.925 17:07:41 -- setup/hugepages.sh@94 -- # local anon 00:04:45.925 17:07:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.925 17:07:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.925 17:07:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.925 17:07:41 -- setup/common.sh@18 -- # local node= 00:04:45.925 17:07:41 -- setup/common.sh@19 -- # local var val 00:04:45.925 17:07:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.925 17:07:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.925 17:07:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.925 17:07:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.925 17:07:41 -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:45.925 17:07:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43376440 kB' 'MemAvailable: 47100616 kB' 'Buffers: 4100 kB' 'Cached: 10761144 kB' 'SwapCached: 0 kB' 'Active: 7520740 kB' 'Inactive: 3692704 kB' 'Active(anon): 7132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451036 kB' 'Mapped: 181236 kB' 'Shmem: 6684100 kB' 'KReclaimable: 280244 kB' 'Slab: 1036044 kB' 'SReclaimable: 280244 kB' 'SUnreclaim: 755800 kB' 'KernelStack: 21920 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8297992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217884 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Inactive 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.925 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.925 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.926 17:07:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.926 17:07:41 -- setup/common.sh@33 -- # echo 0 00:04:45.926 17:07:41 -- setup/common.sh@33 -- # return 0 00:04:45.926 17:07:41 -- setup/hugepages.sh@97 -- # anon=0 00:04:45.926 17:07:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.926 17:07:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.926 17:07:41 -- setup/common.sh@18 -- # local node= 00:04:45.926 17:07:41 -- setup/common.sh@19 -- # local var val 00:04:45.926 17:07:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.926 17:07:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.926 17:07:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.926 17:07:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.926 17:07:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.926 17:07:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.926 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43377696 kB' 'MemAvailable: 47101872 kB' 'Buffers: 4100 kB' 'Cached: 10761148 kB' 'SwapCached: 0 kB' 'Active: 7519964 kB' 'Inactive: 3692704 kB' 'Active(anon): 7131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450792 kB' 'Mapped: 181156 kB' 'Shmem: 6684104 kB' 'KReclaimable: 280244 kB' 'Slab: 1036036 kB' 'SReclaimable: 280244 kB' 'SUnreclaim: 755792 kB' 'KernelStack: 21904 kB' 'PageTables: 7468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8298004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
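The "node0=1024 expecting 1024" line earlier in this trace comes from the same bookkeeping applied per NUMA node: get_nodes enumerates /sys/devices/system/node/node+([0-9]), records the kernel's per-node allocation in nodes_sys, and compares it against the expected split in nodes_test. The sketch below reconstructs that pattern from the log; reading each node's HugePages_Total out of its meminfo file is an assumption made only to keep the sketch self-contained (the traced script fills nodes_sys through its own helpers).

#!/usr/bin/env bash
# Sketch of the per-node hugepage check (get_nodes plus the "nodeN=X expecting X"
# output); reconstructed from the trace, not the repository script.
shopt -s extglob

declare -a nodes_sys
declare -a nodes_test=([0]=1024 [1]=0)   # expected split, as in the run above

get_nodes_sketch() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumption: take the allocated count from the node's meminfo file.
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    (( ${#nodes_sys[@]} > 0 ))
}

check_nodes_sketch() {
    local node
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
    done
}

get_nodes_sketch && check_nodes_sketch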
00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 
17:07:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.927 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.927 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.928 17:07:41 -- setup/common.sh@33 -- # echo 0 00:04:45.928 17:07:41 -- setup/common.sh@33 -- # return 0 00:04:45.928 17:07:41 -- setup/hugepages.sh@99 -- # surp=0 00:04:45.928 17:07:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.928 17:07:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.928 17:07:41 -- setup/common.sh@18 -- # local node= 00:04:45.928 17:07:41 -- setup/common.sh@19 -- # local var val 00:04:45.928 17:07:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.928 17:07:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.928 17:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.928 17:07:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.928 17:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.928 17:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43378380 kB' 'MemAvailable: 47102556 kB' 'Buffers: 4100 kB' 'Cached: 10761160 kB' 'SwapCached: 0 kB' 'Active: 7519920 kB' 'Inactive: 3692704 kB' 'Active(anon): 7131480 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450764 kB' 'Mapped: 181156 kB' 'Shmem: 6684116 kB' 'KReclaimable: 280244 kB' 'Slab: 1036036 kB' 'SReclaimable: 280244 kB' 'SUnreclaim: 755792 kB' 'KernelStack: 21888 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8297812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217836 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:45.928 17:07:42 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.928 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.928 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- 
setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 
17:07:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.929 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.929 17:07:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.929 17:07:42 -- setup/common.sh@33 -- # echo 0 00:04:45.929 
17:07:42 -- setup/common.sh@33 -- # return 0 00:04:45.929 17:07:42 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.929 17:07:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:45.929 nr_hugepages=1024 00:04:45.929 17:07:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.929 resv_hugepages=0 00:04:45.929 17:07:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.929 surplus_hugepages=0 00:04:45.929 17:07:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.929 anon_hugepages=0 00:04:45.929 17:07:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.929 17:07:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.929 17:07:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.930 17:07:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.930 17:07:42 -- setup/common.sh@18 -- # local node= 00:04:45.930 17:07:42 -- setup/common.sh@19 -- # local var val 00:04:45.930 17:07:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.930 17:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.930 17:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.930 17:07:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.930 17:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.930 17:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.930 17:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283796 kB' 'MemFree: 43384036 kB' 'MemAvailable: 47108212 kB' 'Buffers: 4100 kB' 'Cached: 10761172 kB' 'SwapCached: 0 kB' 'Active: 7520056 kB' 'Inactive: 3692704 kB' 'Active(anon): 7131616 kB' 'Inactive(anon): 0 kB' 'Active(file): 388440 kB' 'Inactive(file): 3692704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450908 kB' 'Mapped: 181156 kB' 'Shmem: 6684128 kB' 'KReclaimable: 280244 kB' 'Slab: 1036020 kB' 'SReclaimable: 280244 kB' 'SUnreclaim: 755776 kB' 'KernelStack: 21904 kB' 'PageTables: 7468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481924 kB' 'Committed_AS: 8298036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217804 kB' 'VmallocChunk: 0 kB' 'Percpu: 77056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777012 kB' 'DirectMap2M: 16783360 kB' 'DirectMap1G: 51380224 kB' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.930 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.930 17:07:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 
00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 
17:07:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.931 17:07:42 -- setup/common.sh@33 -- # echo 1024 00:04:45.931 17:07:42 -- setup/common.sh@33 -- # return 0 00:04:45.931 17:07:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.931 17:07:42 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.931 17:07:42 -- setup/hugepages.sh@27 -- # local node 00:04:45.931 17:07:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.931 17:07:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.931 17:07:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.931 17:07:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:45.931 17:07:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.931 17:07:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.931 17:07:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.931 17:07:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.931 17:07:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.931 17:07:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.931 17:07:42 
-- setup/common.sh@18 -- # local node=0 00:04:45.931 17:07:42 -- setup/common.sh@19 -- # local var val 00:04:45.931 17:07:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.931 17:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.931 17:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.931 17:07:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.931 17:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.931 17:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.931 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.931 17:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26851004 kB' 'MemUsed: 5783432 kB' 'SwapCached: 0 kB' 'Active: 2067092 kB' 'Inactive: 107564 kB' 'Active(anon): 1862328 kB' 'Inactive(anon): 0 kB' 'Active(file): 204764 kB' 'Inactive(file): 107564 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1856148 kB' 'Mapped: 146652 kB' 'AnonPages: 321756 kB' 'Shmem: 1543820 kB' 'KernelStack: 11112 kB' 'PageTables: 4684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112992 kB' 'Slab: 437100 kB' 'SReclaimable: 112992 kB' 'SUnreclaim: 324108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.931 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # continue 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.932 17:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.932 17:07:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.932 17:07:42 -- setup/common.sh@33 -- # echo 0 00:04:45.932 17:07:42 -- setup/common.sh@33 -- # return 0 00:04:45.932 17:07:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.932 17:07:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.932 17:07:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.932 17:07:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.932 17:07:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.932 node0=1024 expecting 1024 00:04:45.932 17:07:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.932 00:04:45.932 real 0m7.476s 00:04:45.932 user 0m2.816s 00:04:45.932 sys 0m4.804s 00:04:45.932 17:07:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.932 17:07:42 -- common/autotest_common.sh@10 -- # set +x 00:04:45.932 ************************************ 00:04:45.932 END TEST no_shrink_alloc 00:04:45.932 ************************************ 00:04:45.932 17:07:42 -- setup/hugepages.sh@217 -- # clear_hp 00:04:45.932 17:07:42 -- setup/hugepages.sh@37 -- # local node hp 00:04:45.932 17:07:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:45.932 17:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.932 17:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:45.932 17:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.932 17:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:45.932 17:07:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:45.932 17:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.932 17:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:45.933 17:07:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.933 17:07:42 -- setup/hugepages.sh@41 -- # echo 0 00:04:45.933 17:07:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:45.933 17:07:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:45.933 00:04:45.933 real 0m29.017s 00:04:45.933 user 0m10.216s 00:04:45.933 sys 0m17.420s 00:04:45.933 17:07:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.933 17:07:42 -- common/autotest_common.sh@10 -- # set +x 
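The wall of "[[ ... == HugePages_* ]] / continue" steps above is the harness scanning /proc/meminfo (and the per-node meminfo files under /sys) one field at a time until it reaches the key it was asked for. A condensed sketch of that lookup pattern, assuming a standalone helper with illustrative names rather than the harness's setup/common.sh implementation:

    # get_meminfo_value KEY [NODE] -- illustrative stand-in for the harness helper
    get_meminfo_value() {
        local key=$1 node=$2
        local file=/proc/meminfo line var val _

        # Per-node values live under /sys; otherwise fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi

        while read -r line; do
            # Per-node files prefix every row with "Node <n> "; strip it first.
            [[ $line == "Node "* ]] && line=${line#Node * }
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"        # numeric value only; the kB unit lands in $_
                return 0
            fi
        done < "$file"
        return 1
    }

    # e.g. get_meminfo_value HugePages_Total      -> 1024 on the box traced above
    #      get_meminfo_value HugePages_Surp 0     -> 0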
00:04:45.933 ************************************ 00:04:45.933 END TEST hugepages 00:04:45.933 ************************************ 00:04:45.933 17:07:42 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:45.933 17:07:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.933 17:07:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.933 17:07:42 -- common/autotest_common.sh@10 -- # set +x 00:04:45.933 ************************************ 00:04:45.933 START TEST driver 00:04:45.933 ************************************ 00:04:45.933 17:07:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:45.933 * Looking for test storage... 00:04:45.933 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:45.933 17:07:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:45.933 17:07:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:45.933 17:07:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:45.933 17:07:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:45.933 17:07:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:45.933 17:07:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:45.933 17:07:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:45.933 17:07:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:45.933 17:07:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:45.933 17:07:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.933 17:07:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:45.933 17:07:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:45.933 17:07:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:45.933 17:07:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:45.933 17:07:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:45.933 17:07:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:45.933 17:07:42 -- scripts/common.sh@344 -- # : 1 00:04:45.933 17:07:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:45.933 17:07:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.933 17:07:42 -- scripts/common.sh@364 -- # decimal 1 00:04:45.933 17:07:42 -- scripts/common.sh@352 -- # local d=1 00:04:45.933 17:07:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.933 17:07:42 -- scripts/common.sh@354 -- # echo 1 00:04:45.933 17:07:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:45.933 17:07:42 -- scripts/common.sh@365 -- # decimal 2 00:04:45.933 17:07:42 -- scripts/common.sh@352 -- # local d=2 00:04:45.933 17:07:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.933 17:07:42 -- scripts/common.sh@354 -- # echo 2 00:04:45.933 17:07:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:45.933 17:07:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:45.933 17:07:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:45.933 17:07:42 -- scripts/common.sh@367 -- # return 0 00:04:45.933 17:07:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.933 17:07:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:45.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.933 --rc genhtml_branch_coverage=1 00:04:45.933 --rc genhtml_function_coverage=1 00:04:45.933 --rc genhtml_legend=1 00:04:45.933 --rc geninfo_all_blocks=1 00:04:45.933 --rc geninfo_unexecuted_blocks=1 00:04:45.933 00:04:45.933 ' 00:04:45.933 17:07:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:45.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.933 --rc genhtml_branch_coverage=1 00:04:45.933 --rc genhtml_function_coverage=1 00:04:45.933 --rc genhtml_legend=1 00:04:45.933 --rc geninfo_all_blocks=1 00:04:45.933 --rc geninfo_unexecuted_blocks=1 00:04:45.933 00:04:45.933 ' 00:04:45.933 17:07:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:45.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.933 --rc genhtml_branch_coverage=1 00:04:45.933 --rc genhtml_function_coverage=1 00:04:45.933 --rc genhtml_legend=1 00:04:45.933 --rc geninfo_all_blocks=1 00:04:45.933 --rc geninfo_unexecuted_blocks=1 00:04:45.933 00:04:45.933 ' 00:04:45.933 17:07:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:45.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.933 --rc genhtml_branch_coverage=1 00:04:45.933 --rc genhtml_function_coverage=1 00:04:45.933 --rc genhtml_legend=1 00:04:45.933 --rc geninfo_all_blocks=1 00:04:45.933 --rc geninfo_unexecuted_blocks=1 00:04:45.933 00:04:45.933 ' 00:04:45.933 17:07:42 -- setup/driver.sh@68 -- # setup reset 00:04:45.933 17:07:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.933 17:07:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.211 17:07:47 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:51.211 17:07:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.212 17:07:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.212 17:07:47 -- common/autotest_common.sh@10 -- # set +x 00:04:51.212 ************************************ 00:04:51.212 START TEST guess_driver 00:04:51.212 ************************************ 00:04:51.212 17:07:47 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:51.212 17:07:47 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:51.212 17:07:47 -- setup/driver.sh@47 -- # local fail=0 00:04:51.212 17:07:47 -- setup/driver.sh@49 -- # pick_driver 00:04:51.212 17:07:47 -- setup/driver.sh@36 -- 
# vfio 00:04:51.212 17:07:47 -- setup/driver.sh@21 -- # local iommu_grups 00:04:51.212 17:07:47 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:51.212 17:07:47 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:51.212 17:07:47 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:51.212 17:07:47 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:51.212 17:07:47 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:51.212 17:07:47 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:51.212 17:07:47 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:51.212 17:07:47 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:51.212 17:07:47 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:51.212 17:07:47 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:51.212 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:51.212 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:51.212 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:51.212 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:51.212 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:51.212 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:51.212 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:51.212 17:07:47 -- setup/driver.sh@30 -- # return 0 00:04:51.212 17:07:47 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:51.212 17:07:47 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:51.212 17:07:47 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:51.212 17:07:47 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:51.212 Looking for driver=vfio-pci 00:04:51.212 17:07:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.212 17:07:47 -- setup/driver.sh@45 -- # setup output config 00:04:51.212 17:07:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.212 17:07:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:54.504 17:07:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:50 -- setup/driver.sh@61 
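The pick_driver trace above settles on vfio-pci by confirming that the host exposes IOMMU groups (or allows unsafe no-IOMMU mode) and that modprobe can resolve vfio_pci to real kernel objects. A minimal sketch of that decision, with the function name and the uio_pci_generic fallback assumed rather than taken from this log:

    pick_uio_or_vfio() {
        local unsafe_vfio=N
        local groups

        shopt -s nullglob
        groups=(/sys/kernel/iommu_groups/*)
        shopt -u nullglob

        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi

        # vfio-pci is usable when IOMMU groups exist (176 in the trace above), or
        # unsafe no-IOMMU mode is enabled, and modprobe resolves vfio_pci to *.ko*.
        if ((${#groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo uio_pci_generic   # assumed fallback; this log only exercises the vfio path
    }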
-- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.504 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.504 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.504 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.505 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.505 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.505 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.505 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.505 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.505 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.505 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.505 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.505 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.505 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.505 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.505 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.505 17:07:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.505 17:07:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.505 17:07:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.039 17:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.039 17:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.039 17:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.039 17:07:53 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:57.039 17:07:53 -- setup/driver.sh@65 -- # setup reset 00:04:57.039 17:07:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.039 17:07:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.316 00:05:02.316 real 0m10.796s 00:05:02.316 user 0m2.690s 00:05:02.316 sys 0m5.398s 00:05:02.316 17:07:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.316 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.316 ************************************ 00:05:02.316 END TEST guess_driver 00:05:02.316 ************************************ 00:05:02.316 00:05:02.316 real 0m16.082s 00:05:02.316 user 0m4.170s 00:05:02.316 sys 0m8.382s 00:05:02.316 17:07:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.316 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.316 
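Each sub-test in this log opens with the same lcov version probe (seen above for the driver test and repeated below for the devices test): both version strings are split on '.', '-' and ':' and compared field by field before the coverage flags are exported. A minimal sketch of that comparison, assuming a simplified standalone helper and purely numeric version fields:

    # version_lt A B -- returns 0 when version A sorts strictly before version B
    version_lt() {
        local -a v1 v2
        local i n

        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"

        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            # Missing fields compare as 0, so "1.15" equals "1.15.0".
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # e.g. version_lt 1.15 2 && echo "old lcov: pass the branch/function coverage opts"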
************************************ 00:05:02.316 END TEST driver 00:05:02.316 ************************************ 00:05:02.316 17:07:58 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:02.316 17:07:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.316 17:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.316 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:05:02.316 ************************************ 00:05:02.316 START TEST devices 00:05:02.316 ************************************ 00:05:02.316 17:07:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:02.316 * Looking for test storage... 00:05:02.316 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:02.316 17:07:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.316 17:07:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.316 17:07:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.316 17:07:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.316 17:07:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.316 17:07:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.316 17:07:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.316 17:07:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.316 17:07:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.316 17:07:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.316 17:07:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.316 17:07:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.316 17:07:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.316 17:07:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.316 17:07:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.316 17:07:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.316 17:07:58 -- scripts/common.sh@344 -- # : 1 00:05:02.316 17:07:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.316 17:07:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.316 17:07:58 -- scripts/common.sh@364 -- # decimal 1 00:05:02.316 17:07:58 -- scripts/common.sh@352 -- # local d=1 00:05:02.316 17:07:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.316 17:07:58 -- scripts/common.sh@354 -- # echo 1 00:05:02.316 17:07:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.316 17:07:58 -- scripts/common.sh@365 -- # decimal 2 00:05:02.316 17:07:58 -- scripts/common.sh@352 -- # local d=2 00:05:02.316 17:07:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.316 17:07:58 -- scripts/common.sh@354 -- # echo 2 00:05:02.316 17:07:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.316 17:07:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.316 17:07:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.316 17:07:58 -- scripts/common.sh@367 -- # return 0 00:05:02.316 17:07:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.316 17:07:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.316 --rc genhtml_branch_coverage=1 00:05:02.316 --rc genhtml_function_coverage=1 00:05:02.316 --rc genhtml_legend=1 00:05:02.316 --rc geninfo_all_blocks=1 00:05:02.316 --rc geninfo_unexecuted_blocks=1 00:05:02.316 00:05:02.316 ' 00:05:02.316 17:07:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.316 --rc genhtml_branch_coverage=1 00:05:02.316 --rc genhtml_function_coverage=1 00:05:02.316 --rc genhtml_legend=1 00:05:02.316 --rc geninfo_all_blocks=1 00:05:02.316 --rc geninfo_unexecuted_blocks=1 00:05:02.316 00:05:02.316 ' 00:05:02.316 17:07:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.317 --rc genhtml_branch_coverage=1 00:05:02.317 --rc genhtml_function_coverage=1 00:05:02.317 --rc genhtml_legend=1 00:05:02.317 --rc geninfo_all_blocks=1 00:05:02.317 --rc geninfo_unexecuted_blocks=1 00:05:02.317 00:05:02.317 ' 00:05:02.317 17:07:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.317 --rc genhtml_branch_coverage=1 00:05:02.317 --rc genhtml_function_coverage=1 00:05:02.317 --rc genhtml_legend=1 00:05:02.317 --rc geninfo_all_blocks=1 00:05:02.317 --rc geninfo_unexecuted_blocks=1 00:05:02.317 00:05:02.317 ' 00:05:02.317 17:07:58 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:02.317 17:07:58 -- setup/devices.sh@192 -- # setup reset 00:05:02.317 17:07:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.317 17:07:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.510 17:08:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:06.510 17:08:02 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:06.510 17:08:02 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:06.510 17:08:02 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:06.510 17:08:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.510 17:08:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:06.510 17:08:02 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:06.510 17:08:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.510 17:08:02 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.510 17:08:02 -- setup/devices.sh@196 -- # blocks=() 00:05:06.510 17:08:02 -- setup/devices.sh@196 -- # declare -a blocks 00:05:06.510 17:08:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:06.510 17:08:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:06.510 17:08:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:06.510 17:08:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:06.510 17:08:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:06.510 17:08:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:06.510 17:08:02 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:06.510 17:08:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:06.510 17:08:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:06.510 17:08:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:06.510 17:08:02 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:06.510 No valid GPT data, bailing 00:05:06.510 17:08:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.510 17:08:02 -- scripts/common.sh@393 -- # pt= 00:05:06.510 17:08:02 -- scripts/common.sh@394 -- # return 1 00:05:06.510 17:08:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:06.510 17:08:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:06.510 17:08:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:06.510 17:08:02 -- setup/common.sh@80 -- # echo 2000398934016 00:05:06.510 17:08:02 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:06.510 17:08:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:06.510 17:08:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:06.510 17:08:02 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:06.510 17:08:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:06.510 17:08:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:06.510 17:08:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.510 17:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.510 17:08:02 -- common/autotest_common.sh@10 -- # set +x 00:05:06.510 ************************************ 00:05:06.510 START TEST nvme_mount 00:05:06.510 ************************************ 00:05:06.510 17:08:02 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:06.510 17:08:02 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:06.510 17:08:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:06.510 17:08:02 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.510 17:08:02 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.510 17:08:02 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:06.510 17:08:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.510 17:08:02 -- setup/common.sh@40 -- # local part_no=1 00:05:06.510 17:08:02 -- setup/common.sh@41 -- # local size=1073741824 00:05:06.510 17:08:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.510 17:08:02 -- setup/common.sh@44 -- # parts=() 00:05:06.510 17:08:02 -- setup/common.sh@44 -- # local parts 00:05:06.510 17:08:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.510 17:08:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.510 17:08:02 
-- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.510 17:08:02 -- setup/common.sh@46 -- # (( part++ )) 00:05:06.510 17:08:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.510 17:08:02 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:06.510 17:08:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.510 17:08:02 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:07.079 Creating new GPT entries in memory. 00:05:07.079 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:07.079 other utilities. 00:05:07.079 17:08:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:07.079 17:08:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.079 17:08:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.079 17:08:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.079 17:08:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:08.017 Creating new GPT entries in memory. 00:05:08.017 The operation has completed successfully. 00:05:08.017 17:08:04 -- setup/common.sh@57 -- # (( part++ )) 00:05:08.017 17:08:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.017 17:08:04 -- setup/common.sh@62 -- # wait 1161657 00:05:08.017 17:08:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.017 17:08:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:08.017 17:08:04 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.017 17:08:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:08.017 17:08:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:08.276 17:08:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.276 17:08:04 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.276 17:08:04 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:08.276 17:08:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:08.276 17:08:04 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.276 17:08:04 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.276 17:08:04 -- setup/devices.sh@53 -- # local found=0 00:05:08.276 17:08:04 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.276 17:08:04 -- setup/devices.sh@56 -- # : 00:05:08.276 17:08:04 -- setup/devices.sh@59 -- # local pci status 00:05:08.276 17:08:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.276 17:08:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:08.276 17:08:04 -- setup/devices.sh@47 -- # setup output config 00:05:08.276 17:08:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.276 17:08:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:11.567 17:08:08 -- setup/devices.sh@63 -- # found=1 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.567 17:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.567 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.826 17:08:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.826 17:08:08 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:11.826 17:08:08 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.826 17:08:08 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 
00:05:11.826 17:08:08 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.826 17:08:08 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:11.827 17:08:08 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.827 17:08:08 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.827 17:08:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.827 17:08:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:11.827 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:11.827 17:08:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.827 17:08:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.085 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:12.085 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:12.085 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:12.085 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:12.085 17:08:08 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:12.085 17:08:08 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:12.085 17:08:08 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.085 17:08:08 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:12.085 17:08:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:12.085 17:08:08 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.085 17:08:08 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.085 17:08:08 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:12.085 17:08:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:12.085 17:08:08 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.085 17:08:08 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.085 17:08:08 -- setup/devices.sh@53 -- # local found=0 00:05:12.085 17:08:08 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.085 17:08:08 -- setup/devices.sh@56 -- # : 00:05:12.085 17:08:08 -- setup/devices.sh@59 -- # local pci status 00:05:12.085 17:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.085 17:08:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:12.085 17:08:08 -- setup/devices.sh@47 -- # setup output config 00:05:12.085 17:08:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.085 17:08:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:15.377 17:08:11 -- setup/devices.sh@63 -- # found=1 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.377 17:08:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.377 17:08:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.637 17:08:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.637 17:08:12 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:15.637 17:08:12 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.637 17:08:12 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.637 17:08:12 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.637 17:08:12 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.637 17:08:12 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:15.637 17:08:12 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:15.637 17:08:12 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:15.637 17:08:12 -- setup/devices.sh@50 -- # local mount_point= 00:05:15.637 17:08:12 -- setup/devices.sh@51 -- # local test_file= 00:05:15.637 17:08:12 -- setup/devices.sh@53 -- # local found=0 00:05:15.637 17:08:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.637 17:08:12 -- setup/devices.sh@59 -- # local pci status 00:05:15.637 17:08:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.637 17:08:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:15.637 17:08:12 -- setup/devices.sh@47 -- # setup output config 00:05:15.637 17:08:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.637 17:08:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:18.930 17:08:15 -- setup/devices.sh@63 -- # found=1 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.930 17:08:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.930 17:08:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.190 17:08:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.190 17:08:15 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:19.190 17:08:15 -- setup/devices.sh@68 -- # return 0 00:05:19.190 17:08:15 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:19.190 17:08:15 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.190 17:08:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.190 17:08:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.190 17:08:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:19.190 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:19.190 00:05:19.190 real 0m13.217s 00:05:19.190 user 0m3.955s 00:05:19.190 sys 0m7.227s 00:05:19.190 17:08:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.190 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:19.190 ************************************ 00:05:19.190 END TEST nvme_mount 00:05:19.190 ************************************ 00:05:19.190 17:08:15 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:19.190 17:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.190 17:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.190 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:19.190 ************************************ 00:05:19.190 START TEST dm_mount 00:05:19.190 ************************************ 00:05:19.190 17:08:15 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:19.190 17:08:15 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:19.190 17:08:15 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:19.190 17:08:15 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:19.190 17:08:15 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:19.190 17:08:15 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:19.190 17:08:15 -- setup/common.sh@40 -- # local part_no=2 00:05:19.190 17:08:15 -- setup/common.sh@41 -- # local size=1073741824 00:05:19.190 17:08:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:19.190 17:08:15 -- setup/common.sh@44 -- # parts=() 00:05:19.190 17:08:15 -- setup/common.sh@44 -- # local parts 00:05:19.190 17:08:15 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:19.190 17:08:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.190 17:08:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.190 17:08:15 -- setup/common.sh@46 -- # (( part++ )) 00:05:19.190 17:08:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.190 17:08:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.190 17:08:15 -- setup/common.sh@46 -- # (( part++ )) 00:05:19.190 17:08:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.190 17:08:15 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:19.190 17:08:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:19.190 
17:08:15 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:20.570 Creating new GPT entries in memory. 00:05:20.570 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.570 other utilities. 00:05:20.570 17:08:16 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.570 17:08:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.570 17:08:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.570 17:08:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.570 17:08:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:21.508 Creating new GPT entries in memory. 00:05:21.508 The operation has completed successfully. 00:05:21.508 17:08:17 -- setup/common.sh@57 -- # (( part++ )) 00:05:21.508 17:08:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.508 17:08:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:21.508 17:08:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:21.508 17:08:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:22.447 The operation has completed successfully. 00:05:22.447 17:08:18 -- setup/common.sh@57 -- # (( part++ )) 00:05:22.447 17:08:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.447 17:08:18 -- setup/common.sh@62 -- # wait 1166353 00:05:22.447 17:08:18 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:22.447 17:08:18 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:22.447 17:08:18 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.447 17:08:18 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:22.447 17:08:18 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:22.447 17:08:18 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.447 17:08:18 -- setup/devices.sh@161 -- # break 00:05:22.447 17:08:18 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.447 17:08:18 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:22.447 17:08:18 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:22.447 17:08:18 -- setup/devices.sh@166 -- # dm=dm-2 00:05:22.447 17:08:18 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:22.447 17:08:18 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:22.447 17:08:18 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:22.447 17:08:18 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:22.447 17:08:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:22.447 17:08:18 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:22.447 17:08:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:22.447 17:08:19 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:22.447 17:08:19 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.447 17:08:19 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:22.447 17:08:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:22.447 17:08:19 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:22.447 17:08:19 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.447 17:08:19 -- setup/devices.sh@53 -- # local found=0 00:05:22.447 17:08:19 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:22.447 17:08:19 -- setup/devices.sh@56 -- # : 00:05:22.447 17:08:19 -- setup/devices.sh@59 -- # local pci status 00:05:22.447 17:08:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.447 17:08:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:22.447 17:08:19 -- setup/devices.sh@47 -- # setup output config 00:05:22.447 17:08:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.447 17:08:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:25.821 17:08:22 -- setup/devices.sh@63 -- # found=1 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.821 17:08:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:25.821 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.080 17:08:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.080 17:08:22 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:26.080 17:08:22 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:26.080 17:08:22 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:26.080 17:08:22 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.080 17:08:22 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:26.080 17:08:22 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:26.080 17:08:22 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:26.080 17:08:22 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:26.080 17:08:22 -- setup/devices.sh@50 -- # local mount_point= 00:05:26.080 17:08:22 -- setup/devices.sh@51 -- # local test_file= 00:05:26.080 17:08:22 -- setup/devices.sh@53 -- # local found=0 00:05:26.080 17:08:22 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:26.080 17:08:22 -- setup/devices.sh@59 -- # local pci status 00:05:26.080 17:08:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.080 17:08:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:26.081 17:08:22 -- setup/devices.sh@47 -- # setup output config 00:05:26.081 17:08:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.081 17:08:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:29.376 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:29.377 17:08:25 -- setup/devices.sh@63 -- # found=1 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.377 17:08:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.377 17:08:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.636 17:08:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.636 17:08:26 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:29.636 17:08:26 -- setup/devices.sh@68 -- # return 0 00:05:29.637 17:08:26 -- setup/devices.sh@187 -- # cleanup_dm 00:05:29.637 17:08:26 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:29.637 17:08:26 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:29.637 17:08:26 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:29.637 17:08:26 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.637 17:08:26 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:29.637 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:29.637 17:08:26 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:29.637 17:08:26 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:29.637 00:05:29.637 real 0m10.377s 00:05:29.637 user 0m2.553s 00:05:29.637 sys 0m4.946s 00:05:29.637 17:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.637 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.637 
************************************ 00:05:29.637 END TEST dm_mount 00:05:29.637 ************************************ 00:05:29.637 17:08:26 -- setup/devices.sh@1 -- # cleanup 00:05:29.637 17:08:26 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:29.637 17:08:26 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.637 17:08:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.637 17:08:26 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:29.637 17:08:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.637 17:08:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.896 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:29.896 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:29.896 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:29.896 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:29.896 17:08:26 -- setup/devices.sh@12 -- # cleanup_dm 00:05:29.896 17:08:26 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:29.896 17:08:26 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:29.896 17:08:26 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.896 17:08:26 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:29.896 17:08:26 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.896 17:08:26 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:29.896 00:05:29.896 real 0m28.193s 00:05:29.896 user 0m8.074s 00:05:29.896 sys 0m15.151s 00:05:29.896 17:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.896 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.896 ************************************ 00:05:29.896 END TEST devices 00:05:29.896 ************************************ 00:05:29.896 00:05:29.896 real 1m39.782s 00:05:29.896 user 0m30.781s 00:05:29.896 sys 0m57.023s 00:05:29.896 17:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.896 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.896 ************************************ 00:05:29.896 END TEST setup.sh 00:05:29.896 ************************************ 00:05:30.156 17:08:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:33.450 Hugepages 00:05:33.450 node hugesize free / total 00:05:33.450 node0 1048576kB 0 / 0 00:05:33.450 node0 2048kB 2048 / 2048 00:05:33.450 node1 1048576kB 0 / 0 00:05:33.450 node1 2048kB 0 / 0 00:05:33.450 00:05:33.450 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.450 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:33.450 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:33.450 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:33.450 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:33.450 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:33.450 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:33.450 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:33.450 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:05:33.450 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:33.450 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:33.450 17:08:30 -- spdk/autotest.sh@128 -- # uname -s 00:05:33.450 17:08:30 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:33.450 17:08:30 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:33.450 17:08:30 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:37.647 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:37.647 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:39.558 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:39.558 17:08:35 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:40.497 17:08:36 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:40.497 17:08:36 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:40.497 17:08:36 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.497 17:08:36 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:40.497 17:08:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:40.497 17:08:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:40.497 17:08:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.497 17:08:36 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.497 17:08:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:40.497 17:08:36 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:40.497 17:08:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:40.497 17:08:36 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:43.790 Waiting for block devices as requested 00:05:43.790 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:44.049 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:44.049 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:44.049 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:44.309 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:44.309 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:44.309 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:44.569 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:44.569 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:44.569 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:44.828 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:44.828 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:44.828 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:45.088 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:45.088 0000:80:04.1 (8086 
2021): vfio-pci -> ioatdma 00:05:45.088 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:45.347 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:45.347 17:08:41 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:45.347 17:08:41 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:45.347 17:08:41 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:45.347 17:08:41 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:05:45.347 17:08:41 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:45.347 17:08:41 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:45.347 17:08:41 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:45.347 17:08:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:45.347 17:08:41 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:45.347 17:08:41 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:45.347 17:08:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:45.347 17:08:41 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:45.347 17:08:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:45.347 17:08:42 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:05:45.347 17:08:42 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:45.347 17:08:42 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:45.347 17:08:42 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:45.347 17:08:42 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:45.347 17:08:42 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:45.347 17:08:42 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:45.347 17:08:42 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:45.347 17:08:42 -- common/autotest_common.sh@1552 -- # continue 00:05:45.347 17:08:42 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:45.347 17:08:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.347 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:45.607 17:08:42 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:45.607 17:08:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.607 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:45.607 17:08:42 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:48.926 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:48.926 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:48.926 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:48.926 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:48.926 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:48.926 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:48.926 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:48.926 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:49.186 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:05:51.093 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:51.093 17:08:47 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:51.093 17:08:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.093 17:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.093 17:08:47 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:51.093 17:08:47 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:51.093 17:08:47 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:51.093 17:08:47 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:51.093 17:08:47 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:51.093 17:08:47 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:51.093 17:08:47 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:51.093 17:08:47 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:51.093 17:08:47 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.093 17:08:47 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.093 17:08:47 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:51.353 17:08:47 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:51.353 17:08:47 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:51.353 17:08:47 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:51.353 17:08:47 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:51.353 17:08:47 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:05:51.353 17:08:47 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:51.353 17:08:47 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:05:51.353 17:08:47 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:05:51.353 17:08:47 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:05:51.353 17:08:47 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1176402 00:05:51.353 17:08:47 -- common/autotest_common.sh@1593 -- # waitforlisten 1176402 00:05:51.353 17:08:47 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.353 17:08:47 -- common/autotest_common.sh@829 -- # '[' -z 1176402 ']' 00:05:51.353 17:08:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.353 17:08:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.353 17:08:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.353 17:08:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.353 17:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.353 [2024-12-14 17:08:47.869459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:51.353 [2024-12-14 17:08:47.869520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176402 ] 00:05:51.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.353 [2024-12-14 17:08:47.954635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.353 [2024-12-14 17:08:47.994023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.353 [2024-12-14 17:08:47.994137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.291 17:08:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.291 17:08:48 -- common/autotest_common.sh@862 -- # return 0 00:05:52.291 17:08:48 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:05:52.291 17:08:48 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:05:52.291 17:08:48 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:55.583 nvme0n1 00:05:55.583 17:08:51 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:55.583 [2024-12-14 17:08:51.860726] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:55.583 request: 00:05:55.583 { 00:05:55.583 "nvme_ctrlr_name": "nvme0", 00:05:55.583 "password": "test", 00:05:55.583 "method": "bdev_nvme_opal_revert", 00:05:55.583 "req_id": 1 00:05:55.583 } 00:05:55.583 Got JSON-RPC error response 00:05:55.583 response: 00:05:55.583 { 00:05:55.583 "code": -32602, 00:05:55.583 "message": "Invalid parameters" 00:05:55.583 } 00:05:55.583 17:08:51 -- common/autotest_common.sh@1599 -- # true 00:05:55.583 17:08:51 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:55.583 17:08:51 -- common/autotest_common.sh@1603 -- # killprocess 1176402 00:05:55.583 17:08:51 -- common/autotest_common.sh@936 -- # '[' -z 1176402 ']' 00:05:55.583 17:08:51 -- common/autotest_common.sh@940 -- # kill -0 1176402 00:05:55.583 17:08:51 -- common/autotest_common.sh@941 -- # uname 00:05:55.583 17:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.583 17:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1176402 00:05:55.583 17:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.583 17:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.583 17:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1176402' 00:05:55.583 killing process with pid 1176402 00:05:55.583 17:08:51 -- common/autotest_common.sh@955 -- # kill 1176402 00:05:55.583 17:08:51 -- common/autotest_common.sh@960 -- # wait 1176402 00:05:55.583 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.583 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.583 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 
2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:55.584 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152
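The 'Unexpected size 0 of DMA remapping cleared instead of 2097152' warning above appears to come from EAL's VFIO path when it unmaps 2 MB hugepage-backed DMA regions; the env tests that follow still pass. A rough way to look for matching kernel-side messages on the node (a sketch only; requires dmesg access, and the grep pattern is purely illustrative):

# Look for VFIO/IOMMU kernel messages that may accompany the EAL warning above
sudo dmesg --ctime | grep -iE 'vfio|iommu|dmar' | tail -n 50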
00:05:58.122 17:08:54 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:58.122 17:08:54 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:58.122 17:08:54 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:58.122 17:08:54 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:58.122 17:08:54 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:58.122 17:08:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.122 17:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.122 17:08:54 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:58.122 17:08:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.122 17:08:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.122 17:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.122 ************************************ 00:05:58.122 START TEST env 00:05:58.122 ************************************ 00:05:58.122 17:08:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:58.122 * Looking for test storage... 00:05:58.122 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:58.122 17:08:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.122 17:08:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.122 17:08:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.122 17:08:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.122 17:08:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.122 17:08:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.122 17:08:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.122 17:08:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.122 17:08:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.122 17:08:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.122 17:08:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.122 17:08:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.122 17:08:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.122 17:08:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.122 17:08:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.122 17:08:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.122 17:08:54 -- scripts/common.sh@344 -- # : 1 00:05:58.122 17:08:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.122 17:08:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:05:58.122 17:08:54 -- scripts/common.sh@364 -- # decimal 1 00:05:58.122 17:08:54 -- scripts/common.sh@352 -- # local d=1 00:05:58.122 17:08:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.122 17:08:54 -- scripts/common.sh@354 -- # echo 1 00:05:58.122 17:08:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.122 17:08:54 -- scripts/common.sh@365 -- # decimal 2 00:05:58.122 17:08:54 -- scripts/common.sh@352 -- # local d=2 00:05:58.122 17:08:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.122 17:08:54 -- scripts/common.sh@354 -- # echo 2 00:05:58.122 17:08:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.122 17:08:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.122 17:08:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.122 17:08:54 -- scripts/common.sh@367 -- # return 0 00:05:58.122 17:08:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.122 17:08:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.122 --rc genhtml_branch_coverage=1 00:05:58.122 --rc genhtml_function_coverage=1 00:05:58.122 --rc genhtml_legend=1 00:05:58.122 --rc geninfo_all_blocks=1 00:05:58.122 --rc geninfo_unexecuted_blocks=1 00:05:58.122 00:05:58.122 ' 00:05:58.122 17:08:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.122 --rc genhtml_branch_coverage=1 00:05:58.122 --rc genhtml_function_coverage=1 00:05:58.122 --rc genhtml_legend=1 00:05:58.122 --rc geninfo_all_blocks=1 00:05:58.122 --rc geninfo_unexecuted_blocks=1 00:05:58.122 00:05:58.122 ' 00:05:58.122 17:08:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.122 --rc genhtml_branch_coverage=1 00:05:58.122 --rc genhtml_function_coverage=1 00:05:58.122 --rc genhtml_legend=1 00:05:58.122 --rc geninfo_all_blocks=1 00:05:58.122 --rc geninfo_unexecuted_blocks=1 00:05:58.122 00:05:58.122 ' 00:05:58.122 17:08:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.122 --rc genhtml_branch_coverage=1 00:05:58.122 --rc genhtml_function_coverage=1 00:05:58.122 --rc genhtml_legend=1 00:05:58.122 --rc geninfo_all_blocks=1 00:05:58.122 --rc geninfo_unexecuted_blocks=1 00:05:58.122 00:05:58.122 ' 00:05:58.122 17:08:54 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.122 17:08:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.122 17:08:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.122 17:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.122 ************************************ 00:05:58.122 START TEST env_memory 00:05:58.122 ************************************ 00:05:58.122 17:08:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.122 00:05:58.122 00:05:58.122 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.122 http://cunit.sourceforge.net/ 00:05:58.122 00:05:58.122 00:05:58.122 Suite: memory 00:05:58.122 Test: alloc and free memory map ...[2024-12-14 17:08:54.752334] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:05:58.122 passed 00:05:58.123 Test: mem map translation ...[2024-12-14 17:08:54.770287] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:58.123 [2024-12-14 17:08:54.770302] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:58.123 [2024-12-14 17:08:54.770337] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:58.123 [2024-12-14 17:08:54.770345] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:58.123 passed 00:05:58.123 Test: mem map registration ...[2024-12-14 17:08:54.805276] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:58.123 [2024-12-14 17:08:54.805290] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:58.383 passed 00:05:58.383 Test: mem map adjacent registrations ...passed 00:05:58.383 00:05:58.383 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.383 suites 1 1 n/a 0 0 00:05:58.383 tests 4 4 4 0 0 00:05:58.383 asserts 152 152 152 0 n/a 00:05:58.383 00:05:58.383 Elapsed time = 0.131 seconds 00:05:58.383 00:05:58.383 real 0m0.145s 00:05:58.383 user 0m0.134s 00:05:58.383 sys 0m0.010s 00:05:58.383 17:08:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.383 17:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.383 ************************************ 00:05:58.383 END TEST env_memory 00:05:58.383 ************************************ 00:05:58.383 17:08:54 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:58.383 17:08:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.383 17:08:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.383 17:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.383 ************************************ 00:05:58.383 START TEST env_vtophys 00:05:58.383 ************************************ 00:05:58.383 17:08:54 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:58.383 EAL: lib.eal log level changed from notice to debug 00:05:58.383 EAL: Detected lcore 0 as core 0 on socket 0 00:05:58.383 EAL: Detected lcore 1 as core 1 on socket 0 00:05:58.383 EAL: Detected lcore 2 as core 2 on socket 0 00:05:58.383 EAL: Detected lcore 3 as core 3 on socket 0 00:05:58.383 EAL: Detected lcore 4 as core 4 on socket 0 00:05:58.383 EAL: Detected lcore 5 as core 5 on socket 0 00:05:58.383 EAL: Detected lcore 6 as core 6 on socket 0 00:05:58.383 EAL: Detected lcore 7 as core 8 on socket 0 00:05:58.383 EAL: Detected lcore 8 as core 9 on socket 0 00:05:58.383 EAL: Detected lcore 9 as core 10 on socket 0 00:05:58.383 EAL: Detected lcore 10 as core 11 on socket 0 00:05:58.383 EAL: Detected lcore 11 as core 12 on socket 0 00:05:58.383 EAL: Detected lcore 12 as core 13 on socket 0 00:05:58.383 EAL: Detected lcore 13 as core 14 on socket 0 00:05:58.383 EAL: 
Detected lcore 14 as core 16 on socket 0 00:05:58.383 EAL: Detected lcore 15 as core 17 on socket 0 00:05:58.383 EAL: Detected lcore 16 as core 18 on socket 0 00:05:58.383 EAL: Detected lcore 17 as core 19 on socket 0 00:05:58.383 EAL: Detected lcore 18 as core 20 on socket 0 00:05:58.383 EAL: Detected lcore 19 as core 21 on socket 0 00:05:58.383 EAL: Detected lcore 20 as core 22 on socket 0 00:05:58.383 EAL: Detected lcore 21 as core 24 on socket 0 00:05:58.383 EAL: Detected lcore 22 as core 25 on socket 0 00:05:58.383 EAL: Detected lcore 23 as core 26 on socket 0 00:05:58.383 EAL: Detected lcore 24 as core 27 on socket 0 00:05:58.383 EAL: Detected lcore 25 as core 28 on socket 0 00:05:58.383 EAL: Detected lcore 26 as core 29 on socket 0 00:05:58.383 EAL: Detected lcore 27 as core 30 on socket 0 00:05:58.383 EAL: Detected lcore 28 as core 0 on socket 1 00:05:58.383 EAL: Detected lcore 29 as core 1 on socket 1 00:05:58.383 EAL: Detected lcore 30 as core 2 on socket 1 00:05:58.383 EAL: Detected lcore 31 as core 3 on socket 1 00:05:58.383 EAL: Detected lcore 32 as core 4 on socket 1 00:05:58.383 EAL: Detected lcore 33 as core 5 on socket 1 00:05:58.383 EAL: Detected lcore 34 as core 6 on socket 1 00:05:58.383 EAL: Detected lcore 35 as core 8 on socket 1 00:05:58.383 EAL: Detected lcore 36 as core 9 on socket 1 00:05:58.383 EAL: Detected lcore 37 as core 10 on socket 1 00:05:58.383 EAL: Detected lcore 38 as core 11 on socket 1 00:05:58.383 EAL: Detected lcore 39 as core 12 on socket 1 00:05:58.383 EAL: Detected lcore 40 as core 13 on socket 1 00:05:58.383 EAL: Detected lcore 41 as core 14 on socket 1 00:05:58.383 EAL: Detected lcore 42 as core 16 on socket 1 00:05:58.383 EAL: Detected lcore 43 as core 17 on socket 1 00:05:58.383 EAL: Detected lcore 44 as core 18 on socket 1 00:05:58.383 EAL: Detected lcore 45 as core 19 on socket 1 00:05:58.383 EAL: Detected lcore 46 as core 20 on socket 1 00:05:58.383 EAL: Detected lcore 47 as core 21 on socket 1 00:05:58.383 EAL: Detected lcore 48 as core 22 on socket 1 00:05:58.383 EAL: Detected lcore 49 as core 24 on socket 1 00:05:58.384 EAL: Detected lcore 50 as core 25 on socket 1 00:05:58.384 EAL: Detected lcore 51 as core 26 on socket 1 00:05:58.384 EAL: Detected lcore 52 as core 27 on socket 1 00:05:58.384 EAL: Detected lcore 53 as core 28 on socket 1 00:05:58.384 EAL: Detected lcore 54 as core 29 on socket 1 00:05:58.384 EAL: Detected lcore 55 as core 30 on socket 1 00:05:58.384 EAL: Detected lcore 56 as core 0 on socket 0 00:05:58.384 EAL: Detected lcore 57 as core 1 on socket 0 00:05:58.384 EAL: Detected lcore 58 as core 2 on socket 0 00:05:58.384 EAL: Detected lcore 59 as core 3 on socket 0 00:05:58.384 EAL: Detected lcore 60 as core 4 on socket 0 00:05:58.384 EAL: Detected lcore 61 as core 5 on socket 0 00:05:58.384 EAL: Detected lcore 62 as core 6 on socket 0 00:05:58.384 EAL: Detected lcore 63 as core 8 on socket 0 00:05:58.384 EAL: Detected lcore 64 as core 9 on socket 0 00:05:58.384 EAL: Detected lcore 65 as core 10 on socket 0 00:05:58.384 EAL: Detected lcore 66 as core 11 on socket 0 00:05:58.384 EAL: Detected lcore 67 as core 12 on socket 0 00:05:58.384 EAL: Detected lcore 68 as core 13 on socket 0 00:05:58.384 EAL: Detected lcore 69 as core 14 on socket 0 00:05:58.384 EAL: Detected lcore 70 as core 16 on socket 0 00:05:58.384 EAL: Detected lcore 71 as core 17 on socket 0 00:05:58.384 EAL: Detected lcore 72 as core 18 on socket 0 00:05:58.384 EAL: Detected lcore 73 as core 19 on socket 0 00:05:58.384 EAL: Detected lcore 74 as core 20 on 
socket 0 00:05:58.384 EAL: Detected lcore 75 as core 21 on socket 0 00:05:58.384 EAL: Detected lcore 76 as core 22 on socket 0 00:05:58.384 EAL: Detected lcore 77 as core 24 on socket 0 00:05:58.384 EAL: Detected lcore 78 as core 25 on socket 0 00:05:58.384 EAL: Detected lcore 79 as core 26 on socket 0 00:05:58.384 EAL: Detected lcore 80 as core 27 on socket 0 00:05:58.384 EAL: Detected lcore 81 as core 28 on socket 0 00:05:58.384 EAL: Detected lcore 82 as core 29 on socket 0 00:05:58.384 EAL: Detected lcore 83 as core 30 on socket 0 00:05:58.384 EAL: Detected lcore 84 as core 0 on socket 1 00:05:58.384 EAL: Detected lcore 85 as core 1 on socket 1 00:05:58.384 EAL: Detected lcore 86 as core 2 on socket 1 00:05:58.384 EAL: Detected lcore 87 as core 3 on socket 1 00:05:58.384 EAL: Detected lcore 88 as core 4 on socket 1 00:05:58.384 EAL: Detected lcore 89 as core 5 on socket 1 00:05:58.384 EAL: Detected lcore 90 as core 6 on socket 1 00:05:58.384 EAL: Detected lcore 91 as core 8 on socket 1 00:05:58.384 EAL: Detected lcore 92 as core 9 on socket 1 00:05:58.384 EAL: Detected lcore 93 as core 10 on socket 1 00:05:58.384 EAL: Detected lcore 94 as core 11 on socket 1 00:05:58.384 EAL: Detected lcore 95 as core 12 on socket 1 00:05:58.384 EAL: Detected lcore 96 as core 13 on socket 1 00:05:58.384 EAL: Detected lcore 97 as core 14 on socket 1 00:05:58.384 EAL: Detected lcore 98 as core 16 on socket 1 00:05:58.384 EAL: Detected lcore 99 as core 17 on socket 1 00:05:58.384 EAL: Detected lcore 100 as core 18 on socket 1 00:05:58.384 EAL: Detected lcore 101 as core 19 on socket 1 00:05:58.384 EAL: Detected lcore 102 as core 20 on socket 1 00:05:58.384 EAL: Detected lcore 103 as core 21 on socket 1 00:05:58.384 EAL: Detected lcore 104 as core 22 on socket 1 00:05:58.384 EAL: Detected lcore 105 as core 24 on socket 1 00:05:58.384 EAL: Detected lcore 106 as core 25 on socket 1 00:05:58.384 EAL: Detected lcore 107 as core 26 on socket 1 00:05:58.384 EAL: Detected lcore 108 as core 27 on socket 1 00:05:58.384 EAL: Detected lcore 109 as core 28 on socket 1 00:05:58.384 EAL: Detected lcore 110 as core 29 on socket 1 00:05:58.384 EAL: Detected lcore 111 as core 30 on socket 1 00:05:58.384 EAL: Maximum logical cores by configuration: 128 00:05:58.384 EAL: Detected CPU lcores: 112 00:05:58.384 EAL: Detected NUMA nodes: 2 00:05:58.384 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:58.384 EAL: Detected shared linkage of DPDK 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:58.384 EAL: Registered [vdev] bus. 
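EAL has now enumerated 112 logical cores across 2 NUMA nodes and loaded the DPDK PMD shared objects. The detected topology can be cross-checked directly on the node; a minimal sketch, assuming lscpu and numactl are available there:

# Compare host topology with what EAL reports (112 lcores, 2 NUMA nodes)
lscpu | grep -E '^(CPU\(s\)|NUMA node)'
numactl --hardware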
00:05:58.384 EAL: bus.vdev log level changed from disabled to notice 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:58.384 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:58.384 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:58.384 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:58.384 EAL: No shared files mode enabled, IPC will be disabled 00:05:58.384 EAL: No shared files mode enabled, IPC is disabled 00:05:58.384 EAL: Bus pci wants IOVA as 'DC' 00:05:58.384 EAL: Bus vdev wants IOVA as 'DC' 00:05:58.384 EAL: Buses did not request a specific IOVA mode. 00:05:58.384 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:58.384 EAL: Selected IOVA mode 'VA' 00:05:58.384 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.384 EAL: Probing VFIO support... 00:05:58.384 EAL: IOMMU type 1 (Type 1) is supported 00:05:58.384 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:58.384 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:58.384 EAL: VFIO support initialized 00:05:58.384 EAL: Ask a virtual area of 0x2e000 bytes 00:05:58.384 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:58.384 EAL: Setting up physically contiguous memory... 
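IOVA mode 'VA' is selected above because the IOMMU is usable and VFIO initializes (type 1 supported), while no free 2048 kB hugepages are reported on node 1. Those prerequisites can be checked from a shell; a small sketch using the standard sysfs paths (illustrative only):

# Verify IOMMU groups are exposed and vfio modules are loaded
ls /sys/kernel/iommu_groups | head
lsmod | grep -E '^vfio'
# Per-node availability of 2 MB hugepages (the run above reports none free on node 1)
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages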
00:05:58.384 EAL: Setting maximum number of open files to 524288 00:05:58.384 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:58.384 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:58.384 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:58.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.384 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:58.384 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:58.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.384 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:58.384 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:58.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.384 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:58.384 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:58.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.384 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:58.384 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:58.384 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:58.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.384 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:58.384 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:58.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.384 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:58.384 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:58.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.384 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:58.384 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:58.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.384 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:58.385 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.385 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.385 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:58.385 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:58.385 EAL: Hugepages will be freed exactly as allocated. 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: TSC frequency is ~2500000 KHz 00:05:58.385 EAL: Main lcore 0 is ready (tid=7f572c795a00;cpuset=[0]) 00:05:58.385 EAL: Trying to obtain current memory policy. 00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.385 EAL: Restoring previous memory policy: 0 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was expanded by 2MB 00:05:58.385 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:58.385 EAL: probe driver: 8086:37d2 net_i40e 00:05:58.385 EAL: Not managed by a supported kernel driver, skipped 00:05:58.385 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:58.385 EAL: probe driver: 8086:37d2 net_i40e 00:05:58.385 EAL: Not managed by a supported kernel driver, skipped 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:58.385 EAL: Mem event callback 'spdk:(nil)' registered 00:05:58.385 00:05:58.385 00:05:58.385 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.385 http://cunit.sourceforge.net/ 00:05:58.385 00:05:58.385 00:05:58.385 Suite: components_suite 00:05:58.385 Test: vtophys_malloc_test ...passed 00:05:58.385 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.385 EAL: Restoring previous memory policy: 4 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was expanded by 4MB 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was shrunk by 4MB 00:05:58.385 EAL: Trying to obtain current memory policy. 00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.385 EAL: Restoring previous memory policy: 4 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was expanded by 6MB 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was shrunk by 6MB 00:05:58.385 EAL: Trying to obtain current memory policy. 00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.385 EAL: Restoring previous memory policy: 4 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was expanded by 10MB 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was shrunk by 10MB 00:05:58.385 EAL: Trying to obtain current memory policy. 
00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.385 EAL: Restoring previous memory policy: 4 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was expanded by 18MB 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was shrunk by 18MB 00:05:58.385 EAL: Trying to obtain current memory policy. 00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.385 EAL: Restoring previous memory policy: 4 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was expanded by 34MB 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was shrunk by 34MB 00:05:58.385 EAL: Trying to obtain current memory policy. 00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.385 EAL: Restoring previous memory policy: 4 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was expanded by 66MB 00:05:58.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.385 EAL: request: mp_malloc_sync 00:05:58.385 EAL: No shared files mode enabled, IPC is disabled 00:05:58.385 EAL: Heap on socket 0 was shrunk by 66MB 00:05:58.385 EAL: Trying to obtain current memory policy. 00:05:58.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.647 EAL: Restoring previous memory policy: 4 00:05:58.647 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.647 EAL: request: mp_malloc_sync 00:05:58.647 EAL: No shared files mode enabled, IPC is disabled 00:05:58.647 EAL: Heap on socket 0 was expanded by 130MB 00:05:58.647 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.647 EAL: request: mp_malloc_sync 00:05:58.647 EAL: No shared files mode enabled, IPC is disabled 00:05:58.647 EAL: Heap on socket 0 was shrunk by 130MB 00:05:58.647 EAL: Trying to obtain current memory policy. 00:05:58.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.647 EAL: Restoring previous memory policy: 4 00:05:58.647 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.647 EAL: request: mp_malloc_sync 00:05:58.647 EAL: No shared files mode enabled, IPC is disabled 00:05:58.647 EAL: Heap on socket 0 was expanded by 258MB 00:05:58.647 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.647 EAL: request: mp_malloc_sync 00:05:58.647 EAL: No shared files mode enabled, IPC is disabled 00:05:58.647 EAL: Heap on socket 0 was shrunk by 258MB 00:05:58.647 EAL: Trying to obtain current memory policy. 
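Each 'Heap on socket 0 was expanded/shrunk' pair above corresponds to DPDK mapping and releasing hugepage-backed memory through the registered 'spdk:(nil)' mem event callback. While a run like this is in progress, hugepage consumption can be watched from a second shell; a sketch (the 1-second interval is arbitrary):

# Watch hugepage counters on the node while env_vtophys allocates and frees memory
watch -n 1 "grep -i '^Huge' /proc/meminfo"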
00:05:58.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.907 EAL: Restoring previous memory policy: 4 00:05:58.907 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.907 EAL: request: mp_malloc_sync 00:05:58.907 EAL: No shared files mode enabled, IPC is disabled 00:05:58.907 EAL: Heap on socket 0 was expanded by 514MB 00:05:58.907 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.907 EAL: request: mp_malloc_sync 00:05:58.907 EAL: No shared files mode enabled, IPC is disabled 00:05:58.907 EAL: Heap on socket 0 was shrunk by 514MB 00:05:58.907 EAL: Trying to obtain current memory policy. 00:05:58.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.167 EAL: Restoring previous memory policy: 4 00:05:59.167 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.167 EAL: request: mp_malloc_sync 00:05:59.167 EAL: No shared files mode enabled, IPC is disabled 00:05:59.167 EAL: Heap on socket 0 was expanded by 1026MB 00:05:59.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.440 EAL: request: mp_malloc_sync 00:05:59.440 EAL: No shared files mode enabled, IPC is disabled 00:05:59.440 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:59.440 passed 00:05:59.440 00:05:59.440 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.440 suites 1 1 n/a 0 0 00:05:59.440 tests 2 2 2 0 0 00:05:59.440 asserts 497 497 497 0 n/a 00:05:59.440 00:05:59.440 Elapsed time = 0.982 seconds 00:05:59.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.440 EAL: request: mp_malloc_sync 00:05:59.440 EAL: No shared files mode enabled, IPC is disabled 00:05:59.440 EAL: Heap on socket 0 was shrunk by 2MB 00:05:59.440 EAL: No shared files mode enabled, IPC is disabled 00:05:59.440 EAL: No shared files mode enabled, IPC is disabled 00:05:59.440 EAL: No shared files mode enabled, IPC is disabled 00:05:59.440 00:05:59.440 real 0m1.133s 00:05:59.440 user 0m0.647s 00:05:59.440 sys 0m0.449s 00:05:59.440 17:08:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.440 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.440 ************************************ 00:05:59.440 END TEST env_vtophys 00:05:59.440 ************************************ 00:05:59.440 17:08:56 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.440 17:08:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.440 17:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.440 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.440 ************************************ 00:05:59.440 START TEST env_pci 00:05:59.440 ************************************ 00:05:59.440 17:08:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.440 00:05:59.440 00:05:59.440 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.440 http://cunit.sourceforge.net/ 00:05:59.440 00:05:59.440 00:05:59.440 Suite: pci 00:05:59.440 Test: pci_hook ...[2024-12-14 17:08:56.100469] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1177969 has claimed it 00:05:59.758 EAL: Cannot find device (10000:00:01.0) 00:05:59.758 EAL: Failed to attach device on primary process 00:05:59.758 passed 00:05:59.758 00:05:59.758 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.758 suites 1 1 n/a 0 0 00:05:59.758 tests 1 1 1 0 0 00:05:59.758 asserts 
25 25 25 0 n/a 00:05:59.758 00:05:59.758 Elapsed time = 0.035 seconds 00:05:59.758 00:05:59.758 real 0m0.057s 00:05:59.758 user 0m0.018s 00:05:59.758 sys 0m0.039s 00:05:59.758 17:08:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.758 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.758 ************************************ 00:05:59.758 END TEST env_pci 00:05:59.758 ************************************ 00:05:59.758 17:08:56 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:59.758 17:08:56 -- env/env.sh@15 -- # uname 00:05:59.758 17:08:56 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:59.758 17:08:56 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:59.758 17:08:56 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.758 17:08:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:59.758 17:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.758 17:08:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.758 ************************************ 00:05:59.758 START TEST env_dpdk_post_init 00:05:59.758 ************************************ 00:05:59.758 17:08:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.758 EAL: Detected CPU lcores: 112 00:05:59.758 EAL: Detected NUMA nodes: 2 00:05:59.758 EAL: Detected shared linkage of DPDK 00:05:59.758 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.758 EAL: Selected IOVA mode 'VA' 00:05:59.758 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.758 EAL: VFIO support initialized 00:05:59.758 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.758 EAL: Using IOMMU type 1 (Type 1) 00:05:59.758 EAL: Ignore mapping IO port bar(1) 00:05:59.758 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:59.758 EAL: Ignore mapping IO port bar(1) 00:05:59.758 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:59.758 EAL: Ignore mapping IO port bar(1) 00:05:59.758 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:59.758 EAL: Ignore mapping IO port bar(1) 00:05:59.759 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:59.759 EAL: Ignore mapping IO port bar(1) 00:05:59.759 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:59.759 EAL: Ignore mapping IO port bar(1) 00:05:59.759 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:59.759 EAL: Ignore mapping IO port bar(1) 00:05:59.759 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:59.759 EAL: Ignore mapping IO port bar(1) 00:05:59.759 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:00.018 EAL: Ignore mapping IO port bar(1) 00:06:00.018 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:00.956 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:05.149 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:05.149 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:06:05.149 Starting DPDK initialization... 00:06:05.149 Starting SPDK post initialization... 00:06:05.149 SPDK NVMe probe 00:06:05.149 Attaching to 0000:d8:00.0 00:06:05.149 Attached to 0000:d8:00.0 00:06:05.149 Cleaning up... 00:06:05.149 00:06:05.149 real 0m5.364s 00:06:05.149 user 0m4.002s 00:06:05.149 sys 0m0.413s 00:06:05.149 17:09:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.149 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.149 ************************************ 00:06:05.149 END TEST env_dpdk_post_init 00:06:05.149 ************************************ 00:06:05.149 17:09:01 -- env/env.sh@26 -- # uname 00:06:05.149 17:09:01 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.149 17:09:01 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.149 17:09:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.149 17:09:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.149 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.149 ************************************ 00:06:05.149 START TEST env_mem_callbacks 00:06:05.149 ************************************ 00:06:05.149 17:09:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.149 EAL: Detected CPU lcores: 112 00:06:05.149 EAL: Detected NUMA nodes: 2 00:06:05.149 EAL: Detected shared linkage of DPDK 00:06:05.149 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.149 EAL: Selected IOVA mode 'VA' 00:06:05.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.149 EAL: VFIO support initialized 00:06:05.149 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.149 00:06:05.149 00:06:05.149 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.149 http://cunit.sourceforge.net/ 00:06:05.149 00:06:05.149 00:06:05.149 Suite: memory 00:06:05.149 Test: test ... 
00:06:05.149 register 0x200000200000 2097152 00:06:05.149 malloc 3145728 00:06:05.149 register 0x200000400000 4194304 00:06:05.149 buf 0x200000500000 len 3145728 PASSED 00:06:05.149 malloc 64 00:06:05.149 buf 0x2000004fff40 len 64 PASSED 00:06:05.149 malloc 4194304 00:06:05.149 register 0x200000800000 6291456 00:06:05.149 buf 0x200000a00000 len 4194304 PASSED 00:06:05.149 free 0x200000500000 3145728 00:06:05.149 free 0x2000004fff40 64 00:06:05.149 unregister 0x200000400000 4194304 PASSED 00:06:05.149 free 0x200000a00000 4194304 00:06:05.149 unregister 0x200000800000 6291456 PASSED 00:06:05.149 malloc 8388608 00:06:05.149 register 0x200000400000 10485760 00:06:05.149 buf 0x200000600000 len 8388608 PASSED 00:06:05.149 free 0x200000600000 8388608 00:06:05.149 unregister 0x200000400000 10485760 PASSED 00:06:05.149 passed 00:06:05.149 00:06:05.149 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.149 suites 1 1 n/a 0 0 00:06:05.149 tests 1 1 1 0 0 00:06:05.149 asserts 15 15 15 0 n/a 00:06:05.149 00:06:05.149 Elapsed time = 0.008 seconds 00:06:05.149 00:06:05.149 real 0m0.069s 00:06:05.149 user 0m0.021s 00:06:05.149 sys 0m0.047s 00:06:05.149 17:09:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.149 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.149 ************************************ 00:06:05.149 END TEST env_mem_callbacks 00:06:05.149 ************************************ 00:06:05.149 00:06:05.149 real 0m7.214s 00:06:05.149 user 0m4.998s 00:06:05.149 sys 0m1.288s 00:06:05.149 17:09:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.149 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.149 ************************************ 00:06:05.149 END TEST env 00:06:05.149 ************************************ 00:06:05.149 17:09:01 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.149 17:09:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.149 17:09:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.149 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.149 ************************************ 00:06:05.149 START TEST rpc 00:06:05.149 ************************************ 00:06:05.149 17:09:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:05.408 * Looking for test storage... 
00:06:05.408 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:05.408 17:09:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:05.408 17:09:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:05.408 17:09:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:05.408 17:09:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:05.408 17:09:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:05.408 17:09:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:05.408 17:09:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:05.408 17:09:01 -- scripts/common.sh@335 -- # IFS=.-: 00:06:05.408 17:09:01 -- scripts/common.sh@335 -- # read -ra ver1 00:06:05.408 17:09:01 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.408 17:09:01 -- scripts/common.sh@336 -- # read -ra ver2 00:06:05.408 17:09:01 -- scripts/common.sh@337 -- # local 'op=<' 00:06:05.408 17:09:01 -- scripts/common.sh@339 -- # ver1_l=2 00:06:05.408 17:09:01 -- scripts/common.sh@340 -- # ver2_l=1 00:06:05.408 17:09:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:05.408 17:09:01 -- scripts/common.sh@343 -- # case "$op" in 00:06:05.408 17:09:01 -- scripts/common.sh@344 -- # : 1 00:06:05.408 17:09:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:05.408 17:09:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.408 17:09:01 -- scripts/common.sh@364 -- # decimal 1 00:06:05.408 17:09:01 -- scripts/common.sh@352 -- # local d=1 00:06:05.408 17:09:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.408 17:09:01 -- scripts/common.sh@354 -- # echo 1 00:06:05.408 17:09:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:05.408 17:09:01 -- scripts/common.sh@365 -- # decimal 2 00:06:05.408 17:09:01 -- scripts/common.sh@352 -- # local d=2 00:06:05.408 17:09:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.408 17:09:01 -- scripts/common.sh@354 -- # echo 2 00:06:05.408 17:09:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:05.408 17:09:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:05.408 17:09:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:05.408 17:09:01 -- scripts/common.sh@367 -- # return 0 00:06:05.408 17:09:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.408 17:09:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.408 --rc genhtml_branch_coverage=1 00:06:05.408 --rc genhtml_function_coverage=1 00:06:05.408 --rc genhtml_legend=1 00:06:05.408 --rc geninfo_all_blocks=1 00:06:05.408 --rc geninfo_unexecuted_blocks=1 00:06:05.408 00:06:05.408 ' 00:06:05.408 17:09:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.408 --rc genhtml_branch_coverage=1 00:06:05.408 --rc genhtml_function_coverage=1 00:06:05.408 --rc genhtml_legend=1 00:06:05.408 --rc geninfo_all_blocks=1 00:06:05.408 --rc geninfo_unexecuted_blocks=1 00:06:05.408 00:06:05.408 ' 00:06:05.408 17:09:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.408 --rc genhtml_branch_coverage=1 00:06:05.408 --rc genhtml_function_coverage=1 00:06:05.408 --rc genhtml_legend=1 00:06:05.408 --rc geninfo_all_blocks=1 00:06:05.408 --rc geninfo_unexecuted_blocks=1 00:06:05.408 00:06:05.408 ' 
00:06:05.408 17:09:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:05.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.408 --rc genhtml_branch_coverage=1 00:06:05.408 --rc genhtml_function_coverage=1 00:06:05.408 --rc genhtml_legend=1 00:06:05.408 --rc geninfo_all_blocks=1 00:06:05.408 --rc geninfo_unexecuted_blocks=1 00:06:05.408 00:06:05.408 ' 00:06:05.408 17:09:01 -- rpc/rpc.sh@65 -- # spdk_pid=1179267 00:06:05.408 17:09:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.408 17:09:01 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:05.408 17:09:01 -- rpc/rpc.sh@67 -- # waitforlisten 1179267 00:06:05.408 17:09:01 -- common/autotest_common.sh@829 -- # '[' -z 1179267 ']' 00:06:05.408 17:09:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.408 17:09:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.408 17:09:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.408 17:09:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.408 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.408 [2024-12-14 17:09:02.016814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:05.408 [2024-12-14 17:09:02.016873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179267 ] 00:06:05.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.667 [2024-12-14 17:09:02.100512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.667 [2024-12-14 17:09:02.138175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.667 [2024-12-14 17:09:02.138289] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.667 [2024-12-14 17:09:02.138300] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1179267' to capture a snapshot of events at runtime. 00:06:05.667 [2024-12-14 17:09:02.138308] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1179267 for offline analysis/debug. 
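app_setup_trace above points at the shared-memory trace file for pid 1179267, and the rpc_integrity test that follows drives the freshly started spdk_tgt through rpc_cmd. A roughly equivalent manual session, assuming the default /var/tmp/spdk.sock socket and that spdk_trace was built alongside spdk_tgt in build/bin:

# From the SPDK tree on the test node
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
# Mirror the rpc_integrity calls issued below via rpc_cmd
./scripts/rpc.py bdev_malloc_create 8 512                      # create an 8 MB, 512 B block malloc bdev (Malloc0)
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top of it
./scripts/rpc.py bdev_get_bdevs                                # dump the JSON shown below
# Capture the trace snapshot suggested by app_setup_trace above
./build/bin/spdk_trace -s spdk_tgt -p 1179267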
00:06:05.667 [2024-12-14 17:09:02.138336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.236 17:09:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.236 17:09:02 -- common/autotest_common.sh@862 -- # return 0 00:06:06.236 17:09:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:06.236 17:09:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:06.236 17:09:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:06.236 17:09:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:06.236 17:09:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.236 17:09:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.236 17:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.236 ************************************ 00:06:06.236 START TEST rpc_integrity 00:06:06.236 ************************************ 00:06:06.236 17:09:02 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:06.236 17:09:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.236 17:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.236 17:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.236 17:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.236 17:09:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.236 17:09:02 -- rpc/rpc.sh@13 -- # jq length 00:06:06.236 17:09:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.236 17:09:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.236 17:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.236 17:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.236 17:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.236 17:09:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:06.236 17:09:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.236 17:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.236 17:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.236 17:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.236 17:09:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.236 { 00:06:06.236 "name": "Malloc0", 00:06:06.236 "aliases": [ 00:06:06.236 "e70b40cc-a12b-4ff8-803d-cd98e86b62d3" 00:06:06.236 ], 00:06:06.236 "product_name": "Malloc disk", 00:06:06.236 "block_size": 512, 00:06:06.236 "num_blocks": 16384, 00:06:06.236 "uuid": "e70b40cc-a12b-4ff8-803d-cd98e86b62d3", 00:06:06.236 "assigned_rate_limits": { 00:06:06.236 "rw_ios_per_sec": 0, 00:06:06.236 "rw_mbytes_per_sec": 0, 00:06:06.236 "r_mbytes_per_sec": 0, 00:06:06.236 "w_mbytes_per_sec": 0 00:06:06.236 }, 00:06:06.236 "claimed": false, 00:06:06.236 "zoned": false, 00:06:06.236 "supported_io_types": { 00:06:06.236 "read": true, 00:06:06.236 "write": true, 00:06:06.236 "unmap": true, 00:06:06.236 "write_zeroes": true, 00:06:06.236 "flush": true, 00:06:06.236 "reset": true, 00:06:06.236 "compare": false, 00:06:06.236 "compare_and_write": false, 00:06:06.236 "abort": true, 00:06:06.236 "nvme_admin": 
false, 00:06:06.236 "nvme_io": false 00:06:06.236 }, 00:06:06.236 "memory_domains": [ 00:06:06.236 { 00:06:06.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.236 "dma_device_type": 2 00:06:06.236 } 00:06:06.236 ], 00:06:06.236 "driver_specific": {} 00:06:06.236 } 00:06:06.236 ]' 00:06:06.236 17:09:02 -- rpc/rpc.sh@17 -- # jq length 00:06:06.495 17:09:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.495 17:09:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:06.495 17:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.495 17:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.495 [2024-12-14 17:09:02.953081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:06.495 [2024-12-14 17:09:02.953114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.495 [2024-12-14 17:09:02.953127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x153c280 00:06:06.495 [2024-12-14 17:09:02.953136] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.495 [2024-12-14 17:09:02.954136] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.495 [2024-12-14 17:09:02.954158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.495 Passthru0 00:06:06.495 17:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.495 17:09:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.495 17:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.495 17:09:02 -- common/autotest_common.sh@10 -- # set +x 00:06:06.495 17:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.495 17:09:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.495 { 00:06:06.495 "name": "Malloc0", 00:06:06.495 "aliases": [ 00:06:06.495 "e70b40cc-a12b-4ff8-803d-cd98e86b62d3" 00:06:06.495 ], 00:06:06.495 "product_name": "Malloc disk", 00:06:06.495 "block_size": 512, 00:06:06.495 "num_blocks": 16384, 00:06:06.495 "uuid": "e70b40cc-a12b-4ff8-803d-cd98e86b62d3", 00:06:06.495 "assigned_rate_limits": { 00:06:06.495 "rw_ios_per_sec": 0, 00:06:06.495 "rw_mbytes_per_sec": 0, 00:06:06.495 "r_mbytes_per_sec": 0, 00:06:06.495 "w_mbytes_per_sec": 0 00:06:06.495 }, 00:06:06.495 "claimed": true, 00:06:06.495 "claim_type": "exclusive_write", 00:06:06.495 "zoned": false, 00:06:06.495 "supported_io_types": { 00:06:06.495 "read": true, 00:06:06.495 "write": true, 00:06:06.495 "unmap": true, 00:06:06.495 "write_zeroes": true, 00:06:06.495 "flush": true, 00:06:06.495 "reset": true, 00:06:06.495 "compare": false, 00:06:06.495 "compare_and_write": false, 00:06:06.495 "abort": true, 00:06:06.495 "nvme_admin": false, 00:06:06.495 "nvme_io": false 00:06:06.495 }, 00:06:06.495 "memory_domains": [ 00:06:06.495 { 00:06:06.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.496 "dma_device_type": 2 00:06:06.496 } 00:06:06.496 ], 00:06:06.496 "driver_specific": {} 00:06:06.496 }, 00:06:06.496 { 00:06:06.496 "name": "Passthru0", 00:06:06.496 "aliases": [ 00:06:06.496 "2170c399-4721-5ba0-ace9-4a8b4ce85d0d" 00:06:06.496 ], 00:06:06.496 "product_name": "passthru", 00:06:06.496 "block_size": 512, 00:06:06.496 "num_blocks": 16384, 00:06:06.496 "uuid": "2170c399-4721-5ba0-ace9-4a8b4ce85d0d", 00:06:06.496 "assigned_rate_limits": { 00:06:06.496 "rw_ios_per_sec": 0, 00:06:06.496 "rw_mbytes_per_sec": 0, 00:06:06.496 "r_mbytes_per_sec": 0, 00:06:06.496 "w_mbytes_per_sec": 0 00:06:06.496 }, 00:06:06.496 "claimed": 
false, 00:06:06.496 "zoned": false, 00:06:06.496 "supported_io_types": { 00:06:06.496 "read": true, 00:06:06.496 "write": true, 00:06:06.496 "unmap": true, 00:06:06.496 "write_zeroes": true, 00:06:06.496 "flush": true, 00:06:06.496 "reset": true, 00:06:06.496 "compare": false, 00:06:06.496 "compare_and_write": false, 00:06:06.496 "abort": true, 00:06:06.496 "nvme_admin": false, 00:06:06.496 "nvme_io": false 00:06:06.496 }, 00:06:06.496 "memory_domains": [ 00:06:06.496 { 00:06:06.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.496 "dma_device_type": 2 00:06:06.496 } 00:06:06.496 ], 00:06:06.496 "driver_specific": { 00:06:06.496 "passthru": { 00:06:06.496 "name": "Passthru0", 00:06:06.496 "base_bdev_name": "Malloc0" 00:06:06.496 } 00:06:06.496 } 00:06:06.496 } 00:06:06.496 ]' 00:06:06.496 17:09:02 -- rpc/rpc.sh@21 -- # jq length 00:06:06.496 17:09:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.496 17:09:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.496 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.496 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.496 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.496 17:09:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:06.496 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.496 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.496 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.496 17:09:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.496 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.496 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.496 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.496 17:09:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.496 17:09:03 -- rpc/rpc.sh@26 -- # jq length 00:06:06.496 17:09:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.496 00:06:06.496 real 0m0.280s 00:06:06.496 user 0m0.171s 00:06:06.496 sys 0m0.048s 00:06:06.496 17:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.496 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.496 ************************************ 00:06:06.496 END TEST rpc_integrity 00:06:06.496 ************************************ 00:06:06.496 17:09:03 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:06.496 17:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.496 17:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.496 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.496 ************************************ 00:06:06.496 START TEST rpc_plugins 00:06:06.496 ************************************ 00:06:06.496 17:09:03 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:06:06.496 17:09:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:06.496 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.496 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.496 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.496 17:09:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:06.496 17:09:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:06.496 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.496 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.755 17:09:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:06.755 { 00:06:06.755 "name": 
"Malloc1", 00:06:06.755 "aliases": [ 00:06:06.755 "b3c84b62-24b6-4cc7-bcbc-0df4d22cd4f0" 00:06:06.755 ], 00:06:06.755 "product_name": "Malloc disk", 00:06:06.755 "block_size": 4096, 00:06:06.755 "num_blocks": 256, 00:06:06.755 "uuid": "b3c84b62-24b6-4cc7-bcbc-0df4d22cd4f0", 00:06:06.755 "assigned_rate_limits": { 00:06:06.755 "rw_ios_per_sec": 0, 00:06:06.755 "rw_mbytes_per_sec": 0, 00:06:06.755 "r_mbytes_per_sec": 0, 00:06:06.755 "w_mbytes_per_sec": 0 00:06:06.755 }, 00:06:06.755 "claimed": false, 00:06:06.755 "zoned": false, 00:06:06.755 "supported_io_types": { 00:06:06.755 "read": true, 00:06:06.755 "write": true, 00:06:06.755 "unmap": true, 00:06:06.755 "write_zeroes": true, 00:06:06.755 "flush": true, 00:06:06.755 "reset": true, 00:06:06.755 "compare": false, 00:06:06.755 "compare_and_write": false, 00:06:06.755 "abort": true, 00:06:06.755 "nvme_admin": false, 00:06:06.755 "nvme_io": false 00:06:06.755 }, 00:06:06.755 "memory_domains": [ 00:06:06.755 { 00:06:06.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.755 "dma_device_type": 2 00:06:06.755 } 00:06:06.755 ], 00:06:06.755 "driver_specific": {} 00:06:06.755 } 00:06:06.755 ]' 00:06:06.755 17:09:03 -- rpc/rpc.sh@32 -- # jq length 00:06:06.755 17:09:03 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:06.755 17:09:03 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:06.755 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.755 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.755 17:09:03 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:06.755 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.755 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.755 17:09:03 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:06.755 17:09:03 -- rpc/rpc.sh@36 -- # jq length 00:06:06.755 17:09:03 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:06.755 00:06:06.755 real 0m0.142s 00:06:06.755 user 0m0.084s 00:06:06.755 sys 0m0.025s 00:06:06.755 17:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.755 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 ************************************ 00:06:06.755 END TEST rpc_plugins 00:06:06.755 ************************************ 00:06:06.755 17:09:03 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:06.755 17:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.755 17:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.755 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 ************************************ 00:06:06.755 START TEST rpc_trace_cmd_test 00:06:06.755 ************************************ 00:06:06.755 17:09:03 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:06:06.755 17:09:03 -- rpc/rpc.sh@40 -- # local info 00:06:06.755 17:09:03 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:06.755 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.755 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.755 17:09:03 -- rpc/rpc.sh@42 -- # info='{ 00:06:06.755 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1179267", 00:06:06.755 "tpoint_group_mask": "0x8", 00:06:06.755 "iscsi_conn": { 00:06:06.755 "mask": "0x2", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 
"scsi": { 00:06:06.755 "mask": "0x4", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "bdev": { 00:06:06.755 "mask": "0x8", 00:06:06.755 "tpoint_mask": "0xffffffffffffffff" 00:06:06.755 }, 00:06:06.755 "nvmf_rdma": { 00:06:06.755 "mask": "0x10", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "nvmf_tcp": { 00:06:06.755 "mask": "0x20", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "ftl": { 00:06:06.755 "mask": "0x40", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "blobfs": { 00:06:06.755 "mask": "0x80", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "dsa": { 00:06:06.755 "mask": "0x200", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "thread": { 00:06:06.755 "mask": "0x400", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "nvme_pcie": { 00:06:06.755 "mask": "0x800", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "iaa": { 00:06:06.755 "mask": "0x1000", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "nvme_tcp": { 00:06:06.755 "mask": "0x2000", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 }, 00:06:06.755 "bdev_nvme": { 00:06:06.755 "mask": "0x4000", 00:06:06.755 "tpoint_mask": "0x0" 00:06:06.755 } 00:06:06.755 }' 00:06:06.755 17:09:03 -- rpc/rpc.sh@43 -- # jq length 00:06:06.755 17:09:03 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:06.755 17:09:03 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:07.014 17:09:03 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:07.014 17:09:03 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:07.014 17:09:03 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:07.014 17:09:03 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:07.014 17:09:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:07.014 17:09:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:07.014 17:09:03 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:07.014 00:06:07.014 real 0m0.202s 00:06:07.014 user 0m0.152s 00:06:07.014 sys 0m0.040s 00:06:07.014 17:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.014 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.014 ************************************ 00:06:07.014 END TEST rpc_trace_cmd_test 00:06:07.014 ************************************ 00:06:07.014 17:09:03 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:07.014 17:09:03 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:07.014 17:09:03 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:07.014 17:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.014 17:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.014 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.014 ************************************ 00:06:07.014 START TEST rpc_daemon_integrity 00:06:07.014 ************************************ 00:06:07.014 17:09:03 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:07.014 17:09:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.014 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.014 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.014 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.014 17:09:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.014 17:09:03 -- rpc/rpc.sh@13 -- # jq length 00:06:07.014 17:09:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.014 17:09:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:07.014 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:07.014 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.014 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.014 17:09:03 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:07.014 17:09:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.014 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.014 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.014 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.014 17:09:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.014 { 00:06:07.015 "name": "Malloc2", 00:06:07.015 "aliases": [ 00:06:07.015 "4676c5af-e39a-4cb6-8bf1-6b3094caa951" 00:06:07.015 ], 00:06:07.015 "product_name": "Malloc disk", 00:06:07.015 "block_size": 512, 00:06:07.015 "num_blocks": 16384, 00:06:07.015 "uuid": "4676c5af-e39a-4cb6-8bf1-6b3094caa951", 00:06:07.015 "assigned_rate_limits": { 00:06:07.015 "rw_ios_per_sec": 0, 00:06:07.015 "rw_mbytes_per_sec": 0, 00:06:07.015 "r_mbytes_per_sec": 0, 00:06:07.015 "w_mbytes_per_sec": 0 00:06:07.015 }, 00:06:07.015 "claimed": false, 00:06:07.015 "zoned": false, 00:06:07.015 "supported_io_types": { 00:06:07.015 "read": true, 00:06:07.015 "write": true, 00:06:07.015 "unmap": true, 00:06:07.015 "write_zeroes": true, 00:06:07.015 "flush": true, 00:06:07.015 "reset": true, 00:06:07.015 "compare": false, 00:06:07.015 "compare_and_write": false, 00:06:07.015 "abort": true, 00:06:07.015 "nvme_admin": false, 00:06:07.015 "nvme_io": false 00:06:07.015 }, 00:06:07.015 "memory_domains": [ 00:06:07.015 { 00:06:07.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.015 "dma_device_type": 2 00:06:07.015 } 00:06:07.015 ], 00:06:07.015 "driver_specific": {} 00:06:07.015 } 00:06:07.015 ]' 00:06:07.015 17:09:03 -- rpc/rpc.sh@17 -- # jq length 00:06:07.274 17:09:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.274 17:09:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:07.274 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.274 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.274 [2024-12-14 17:09:03.735205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:07.274 [2024-12-14 17:09:03.735236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.274 [2024-12-14 17:09:03.735254] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x153fa20 00:06:07.274 [2024-12-14 17:09:03.735263] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.274 [2024-12-14 17:09:03.736154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.274 [2024-12-14 17:09:03.736177] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.274 Passthru0 00:06:07.274 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.274 17:09:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.274 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.274 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.274 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.274 17:09:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.274 { 00:06:07.274 "name": "Malloc2", 00:06:07.274 "aliases": [ 00:06:07.274 "4676c5af-e39a-4cb6-8bf1-6b3094caa951" 00:06:07.274 ], 00:06:07.274 "product_name": "Malloc disk", 00:06:07.274 "block_size": 512, 00:06:07.274 "num_blocks": 16384, 00:06:07.274 "uuid": "4676c5af-e39a-4cb6-8bf1-6b3094caa951", 
00:06:07.274 "assigned_rate_limits": { 00:06:07.274 "rw_ios_per_sec": 0, 00:06:07.274 "rw_mbytes_per_sec": 0, 00:06:07.274 "r_mbytes_per_sec": 0, 00:06:07.274 "w_mbytes_per_sec": 0 00:06:07.274 }, 00:06:07.274 "claimed": true, 00:06:07.274 "claim_type": "exclusive_write", 00:06:07.274 "zoned": false, 00:06:07.274 "supported_io_types": { 00:06:07.274 "read": true, 00:06:07.274 "write": true, 00:06:07.274 "unmap": true, 00:06:07.274 "write_zeroes": true, 00:06:07.274 "flush": true, 00:06:07.274 "reset": true, 00:06:07.274 "compare": false, 00:06:07.274 "compare_and_write": false, 00:06:07.274 "abort": true, 00:06:07.274 "nvme_admin": false, 00:06:07.274 "nvme_io": false 00:06:07.274 }, 00:06:07.274 "memory_domains": [ 00:06:07.274 { 00:06:07.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.274 "dma_device_type": 2 00:06:07.274 } 00:06:07.274 ], 00:06:07.274 "driver_specific": {} 00:06:07.274 }, 00:06:07.274 { 00:06:07.274 "name": "Passthru0", 00:06:07.274 "aliases": [ 00:06:07.274 "648cab3e-3e8d-57cf-9928-535753e579f3" 00:06:07.274 ], 00:06:07.274 "product_name": "passthru", 00:06:07.274 "block_size": 512, 00:06:07.274 "num_blocks": 16384, 00:06:07.274 "uuid": "648cab3e-3e8d-57cf-9928-535753e579f3", 00:06:07.274 "assigned_rate_limits": { 00:06:07.274 "rw_ios_per_sec": 0, 00:06:07.274 "rw_mbytes_per_sec": 0, 00:06:07.274 "r_mbytes_per_sec": 0, 00:06:07.274 "w_mbytes_per_sec": 0 00:06:07.274 }, 00:06:07.274 "claimed": false, 00:06:07.274 "zoned": false, 00:06:07.274 "supported_io_types": { 00:06:07.274 "read": true, 00:06:07.274 "write": true, 00:06:07.274 "unmap": true, 00:06:07.274 "write_zeroes": true, 00:06:07.274 "flush": true, 00:06:07.274 "reset": true, 00:06:07.274 "compare": false, 00:06:07.274 "compare_and_write": false, 00:06:07.274 "abort": true, 00:06:07.274 "nvme_admin": false, 00:06:07.274 "nvme_io": false 00:06:07.274 }, 00:06:07.274 "memory_domains": [ 00:06:07.274 { 00:06:07.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.274 "dma_device_type": 2 00:06:07.274 } 00:06:07.274 ], 00:06:07.274 "driver_specific": { 00:06:07.274 "passthru": { 00:06:07.274 "name": "Passthru0", 00:06:07.274 "base_bdev_name": "Malloc2" 00:06:07.274 } 00:06:07.274 } 00:06:07.274 } 00:06:07.274 ]' 00:06:07.274 17:09:03 -- rpc/rpc.sh@21 -- # jq length 00:06:07.274 17:09:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.274 17:09:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.274 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.274 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.274 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.274 17:09:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:07.274 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.274 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.274 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.274 17:09:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.274 17:09:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.274 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.274 17:09:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.274 17:09:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.274 17:09:03 -- rpc/rpc.sh@26 -- # jq length 00:06:07.274 17:09:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.274 00:06:07.274 real 0m0.273s 00:06:07.274 user 0m0.164s 00:06:07.274 sys 0m0.046s 00:06:07.274 17:09:03 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:06:07.274 17:09:03 -- common/autotest_common.sh@10 -- # set +x 00:06:07.274 ************************************ 00:06:07.274 END TEST rpc_daemon_integrity 00:06:07.274 ************************************ 00:06:07.274 17:09:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:07.274 17:09:03 -- rpc/rpc.sh@84 -- # killprocess 1179267 00:06:07.274 17:09:03 -- common/autotest_common.sh@936 -- # '[' -z 1179267 ']' 00:06:07.274 17:09:03 -- common/autotest_common.sh@940 -- # kill -0 1179267 00:06:07.274 17:09:03 -- common/autotest_common.sh@941 -- # uname 00:06:07.275 17:09:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.275 17:09:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1179267 00:06:07.534 17:09:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.534 17:09:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.534 17:09:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1179267' 00:06:07.534 killing process with pid 1179267 00:06:07.534 17:09:03 -- common/autotest_common.sh@955 -- # kill 1179267 00:06:07.534 17:09:03 -- common/autotest_common.sh@960 -- # wait 1179267 00:06:07.793 00:06:07.793 real 0m2.498s 00:06:07.793 user 0m3.066s 00:06:07.793 sys 0m0.799s 00:06:07.793 17:09:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.793 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.793 ************************************ 00:06:07.793 END TEST rpc 00:06:07.793 ************************************ 00:06:07.793 17:09:04 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:07.793 17:09:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.793 17:09:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.793 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.793 ************************************ 00:06:07.793 START TEST rpc_client 00:06:07.793 ************************************ 00:06:07.793 17:09:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:07.793 * Looking for test storage... 
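For reference, the rpc_integrity pass traced above reduces to the following rpc.py sequence against the target under test. The socket path is an assumption (the harness drives rpc_cmd against the default /var/tmp/spdk.sock); the bdev names and sizes match the log.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock                                        # assumed default RPC socket
$rpc -s $sock bdev_get_bdevs | jq length                       # 0 on a clean target
malloc=$($rpc -s $sock bdev_malloc_create 8 512)               # 8 MiB malloc bdev, 512 B blocks -> Malloc0
$rpc -s $sock bdev_passthru_create -b "$malloc" -p Passthru0   # claims Malloc0 behind Passthru0
$rpc -s $sock bdev_get_bdevs | jq length                       # 2: Malloc0 (claimed) plus Passthru0
$rpc -s $sock bdev_passthru_delete Passthru0
$rpc -s $sock bdev_malloc_delete "$malloc"
$rpc -s $sock bdev_get_bdevs | jq length                       # back to 0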
00:06:07.793 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:07.793 17:09:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:07.793 17:09:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:07.793 17:09:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.052 17:09:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.052 17:09:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.052 17:09:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.052 17:09:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.052 17:09:04 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.052 17:09:04 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.052 17:09:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.052 17:09:04 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.052 17:09:04 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.052 17:09:04 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.052 17:09:04 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.052 17:09:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.052 17:09:04 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.052 17:09:04 -- scripts/common.sh@344 -- # : 1 00:06:08.052 17:09:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.052 17:09:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.052 17:09:04 -- scripts/common.sh@364 -- # decimal 1 00:06:08.052 17:09:04 -- scripts/common.sh@352 -- # local d=1 00:06:08.052 17:09:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.052 17:09:04 -- scripts/common.sh@354 -- # echo 1 00:06:08.052 17:09:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.052 17:09:04 -- scripts/common.sh@365 -- # decimal 2 00:06:08.052 17:09:04 -- scripts/common.sh@352 -- # local d=2 00:06:08.052 17:09:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.052 17:09:04 -- scripts/common.sh@354 -- # echo 2 00:06:08.052 17:09:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.052 17:09:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.052 17:09:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.052 17:09:04 -- scripts/common.sh@367 -- # return 0 00:06:08.052 17:09:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.052 17:09:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 00:06:08.052 17:09:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 00:06:08.052 17:09:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 
00:06:08.052 17:09:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.052 --rc genhtml_branch_coverage=1 00:06:08.052 --rc genhtml_function_coverage=1 00:06:08.052 --rc genhtml_legend=1 00:06:08.052 --rc geninfo_all_blocks=1 00:06:08.052 --rc geninfo_unexecuted_blocks=1 00:06:08.052 00:06:08.052 ' 00:06:08.052 17:09:04 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:08.052 OK 00:06:08.052 17:09:04 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.052 00:06:08.052 real 0m0.212s 00:06:08.052 user 0m0.113s 00:06:08.052 sys 0m0.117s 00:06:08.052 17:09:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.052 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.052 ************************************ 00:06:08.052 END TEST rpc_client 00:06:08.052 ************************************ 00:06:08.052 17:09:04 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.052 17:09:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.052 17:09:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.052 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.052 ************************************ 00:06:08.052 START TEST json_config 00:06:08.052 ************************************ 00:06:08.052 17:09:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.052 17:09:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.052 17:09:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.052 17:09:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.052 17:09:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.052 17:09:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.052 17:09:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.052 17:09:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.052 17:09:04 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.052 17:09:04 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.052 17:09:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.052 17:09:04 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.052 17:09:04 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.052 17:09:04 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.052 17:09:04 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.052 17:09:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.052 17:09:04 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.052 17:09:04 -- scripts/common.sh@344 -- # : 1 00:06:08.052 17:09:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.052 17:09:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.312 17:09:04 -- scripts/common.sh@364 -- # decimal 1 00:06:08.312 17:09:04 -- scripts/common.sh@352 -- # local d=1 00:06:08.312 17:09:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.312 17:09:04 -- scripts/common.sh@354 -- # echo 1 00:06:08.312 17:09:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.312 17:09:04 -- scripts/common.sh@365 -- # decimal 2 00:06:08.312 17:09:04 -- scripts/common.sh@352 -- # local d=2 00:06:08.312 17:09:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.312 17:09:04 -- scripts/common.sh@354 -- # echo 2 00:06:08.312 17:09:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.312 17:09:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.312 17:09:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.312 17:09:04 -- scripts/common.sh@367 -- # return 0 00:06:08.312 17:09:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.312 17:09:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.312 --rc genhtml_branch_coverage=1 00:06:08.312 --rc genhtml_function_coverage=1 00:06:08.312 --rc genhtml_legend=1 00:06:08.312 --rc geninfo_all_blocks=1 00:06:08.312 --rc geninfo_unexecuted_blocks=1 00:06:08.312 00:06:08.312 ' 00:06:08.312 17:09:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.312 --rc genhtml_branch_coverage=1 00:06:08.312 --rc genhtml_function_coverage=1 00:06:08.312 --rc genhtml_legend=1 00:06:08.312 --rc geninfo_all_blocks=1 00:06:08.312 --rc geninfo_unexecuted_blocks=1 00:06:08.312 00:06:08.312 ' 00:06:08.312 17:09:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.312 --rc genhtml_branch_coverage=1 00:06:08.312 --rc genhtml_function_coverage=1 00:06:08.312 --rc genhtml_legend=1 00:06:08.312 --rc geninfo_all_blocks=1 00:06:08.312 --rc geninfo_unexecuted_blocks=1 00:06:08.312 00:06:08.312 ' 00:06:08.312 17:09:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.312 --rc genhtml_branch_coverage=1 00:06:08.312 --rc genhtml_function_coverage=1 00:06:08.312 --rc genhtml_legend=1 00:06:08.312 --rc geninfo_all_blocks=1 00:06:08.312 --rc geninfo_unexecuted_blocks=1 00:06:08.312 00:06:08.312 ' 00:06:08.312 17:09:04 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.312 17:09:04 -- nvmf/common.sh@7 -- # uname -s 00:06:08.312 17:09:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.312 17:09:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.312 17:09:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.312 17:09:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.312 17:09:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.312 17:09:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.312 17:09:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.312 17:09:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.312 17:09:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.312 17:09:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.312 17:09:04 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:08.312 17:09:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:08.312 17:09:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.312 17:09:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.312 17:09:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.312 17:09:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:08.312 17:09:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.312 17:09:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.312 17:09:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.312 17:09:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.312 17:09:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.312 17:09:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.312 17:09:04 -- paths/export.sh@5 -- # export PATH 00:06:08.312 17:09:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.312 17:09:04 -- nvmf/common.sh@46 -- # : 0 00:06:08.312 17:09:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:08.312 17:09:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:08.312 17:09:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:08.313 17:09:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.313 17:09:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.313 17:09:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:08.313 17:09:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:08.313 17:09:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:08.313 17:09:04 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:08.313 17:09:04 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:08.313 17:09:04 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:08.313 17:09:04 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:08.313 17:09:04 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:08.313 17:09:04 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:08.313 17:09:04 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:08.313 17:09:04 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:08.313 17:09:04 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:08.313 17:09:04 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:08.313 17:09:04 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:08.313 17:09:04 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:08.313 17:09:04 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:08.313 17:09:04 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.313 17:09:04 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:08.313 INFO: JSON configuration test init 00:06:08.313 17:09:04 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:08.313 17:09:04 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:08.313 17:09:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.313 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.313 17:09:04 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:08.313 17:09:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.313 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.313 17:09:04 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:08.313 17:09:04 -- json_config/json_config.sh@98 -- # local app=target 00:06:08.313 17:09:04 -- json_config/json_config.sh@99 -- # shift 00:06:08.313 17:09:04 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:08.313 17:09:04 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:08.313 17:09:04 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:08.313 17:09:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:08.313 17:09:04 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:08.313 17:09:04 -- json_config/json_config.sh@111 -- # app_pid[$app]=1180148 00:06:08.313 17:09:04 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:08.313 Waiting for target to run... 
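The target is launched with the app parameters assembled above (-m 0x1 -s 1024, RPC socket /var/tmp/spdk_tgt.sock) plus --wait-for-rpc, which holds subsystem initialization until a framework_start_init RPC arrives so a saved configuration can be loaded first. A rough hand-run equivalent, polling the socket in place of the harness's waitforlisten helper, is:

spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
$spdk_tgt -m 0x1 -s 1024 -r $sock --wait-for-rpc &
tgt_pid=$!
until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # wait for the RPC socket to answer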
00:06:08.313 17:09:04 -- json_config/json_config.sh@114 -- # waitforlisten 1180148 /var/tmp/spdk_tgt.sock 00:06:08.313 17:09:04 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:08.313 17:09:04 -- common/autotest_common.sh@829 -- # '[' -z 1180148 ']' 00:06:08.313 17:09:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.313 17:09:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.313 17:09:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.313 17:09:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.313 17:09:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.313 [2024-12-14 17:09:04.849838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:08.313 [2024-12-14 17:09:04.849898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180148 ] 00:06:08.313 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.574 [2024-12-14 17:09:05.155944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.574 [2024-12-14 17:09:05.176824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.574 [2024-12-14 17:09:05.176924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.143 17:09:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.143 17:09:05 -- common/autotest_common.sh@862 -- # return 0 00:06:09.143 17:09:05 -- json_config/json_config.sh@115 -- # echo '' 00:06:09.143 00:06:09.143 17:09:05 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:09.143 17:09:05 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:09.143 17:09:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:09.143 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.143 17:09:05 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:09.143 17:09:05 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:09.143 17:09:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.143 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.143 17:09:05 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:09.143 17:09:05 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:09.143 17:09:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:12.434 17:09:08 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:12.434 17:09:08 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:12.434 17:09:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.434 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:12.434 17:09:08 -- json_config/json_config.sh@48 -- # local ret=0 00:06:12.434 17:09:08 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:12.434 17:09:08 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:06:12.434 17:09:08 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:12.434 17:09:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:12.434 17:09:08 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:12.434 17:09:08 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:12.434 17:09:08 -- json_config/json_config.sh@51 -- # local get_types 00:06:12.434 17:09:08 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:12.434 17:09:08 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:12.434 17:09:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.434 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:06:12.434 17:09:09 -- json_config/json_config.sh@58 -- # return 0 00:06:12.434 17:09:09 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:12.434 17:09:09 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:12.434 17:09:09 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:12.434 17:09:09 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:12.434 17:09:09 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:12.434 17:09:09 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:12.434 17:09:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.434 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.434 17:09:09 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:12.434 17:09:09 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:06:12.434 17:09:09 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:06:12.434 17:09:09 -- json_config/json_config.sh@287 -- # nvmftestinit 00:06:12.434 17:09:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:06:12.434 17:09:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.434 17:09:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:12.434 17:09:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:12.434 17:09:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:12.434 17:09:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.434 17:09:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:12.434 17:09:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.434 17:09:09 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:06:12.434 17:09:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:12.434 17:09:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:12.434 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:06:20.556 17:09:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:20.556 17:09:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:20.556 17:09:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:20.556 17:09:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:20.556 17:09:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:20.556 17:09:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:20.556 17:09:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:20.556 17:09:15 -- nvmf/common.sh@294 -- # net_devs=() 00:06:20.556 17:09:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:20.556 17:09:15 -- nvmf/common.sh@295 -- # 
e810=() 00:06:20.557 17:09:15 -- nvmf/common.sh@295 -- # local -ga e810 00:06:20.557 17:09:15 -- nvmf/common.sh@296 -- # x722=() 00:06:20.557 17:09:15 -- nvmf/common.sh@296 -- # local -ga x722 00:06:20.557 17:09:15 -- nvmf/common.sh@297 -- # mlx=() 00:06:20.557 17:09:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:20.557 17:09:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.557 17:09:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:20.557 17:09:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:06:20.557 17:09:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:06:20.557 17:09:15 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:06:20.557 17:09:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:20.557 17:09:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:20.557 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:20.557 17:09:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:20.557 17:09:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:20.557 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:20.557 17:09:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:20.557 17:09:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:20.557 17:09:15 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.557 17:09:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
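The scan above walks the PCI bus for known NIC IDs and keeps the two Mellanox ports (vendor 0x15b3, device 0x1015) at 0000:d9:00.0 and 0000:d9:00.1. The same devices can be listed directly; a quick cross-check on this node would be:

lspci -D -d 15b3:1015                        # should report the two 0000:d9:00.x ports found above
ls /sys/bus/pci/devices/0000:d9:00.0/net     # -> mlx_0_0, the netdev the test binds to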
00:06:20.557 17:09:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.557 17:09:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:20.557 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:20.557 17:09:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.557 17:09:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.557 17:09:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:20.557 17:09:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.557 17:09:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:20.557 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:20.557 17:09:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.557 17:09:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:20.557 17:09:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:20.557 17:09:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:06:20.557 17:09:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:06:20.557 17:09:15 -- nvmf/common.sh@57 -- # uname 00:06:20.557 17:09:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:06:20.557 17:09:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:06:20.557 17:09:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:06:20.557 17:09:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:06:20.557 17:09:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:06:20.557 17:09:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:06:20.557 17:09:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:06:20.557 17:09:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:06:20.557 17:09:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:06:20.557 17:09:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:20.557 17:09:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:06:20.557 17:09:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:20.557 17:09:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:20.557 17:09:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:20.557 17:09:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:20.557 17:09:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:20.557 17:09:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:20.557 17:09:15 -- nvmf/common.sh@104 -- # continue 2 00:06:20.557 17:09:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:20.557 17:09:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:20.557 17:09:15 -- nvmf/common.sh@104 -- # continue 2 00:06:20.557 17:09:15 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:06:20.557 17:09:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:06:20.557 17:09:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:20.557 17:09:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:20.557 17:09:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:20.557 17:09:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:20.557 17:09:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:06:20.557 17:09:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:06:20.557 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:20.557 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:20.557 altname enp217s0f0np0 00:06:20.557 altname ens818f0np0 00:06:20.557 inet 192.168.100.8/24 scope global mlx_0_0 00:06:20.557 valid_lft forever preferred_lft forever 00:06:20.557 17:09:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:06:20.557 17:09:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:06:20.557 17:09:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:20.557 17:09:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:20.557 17:09:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:20.557 17:09:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:20.557 17:09:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:06:20.557 17:09:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:06:20.557 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:20.557 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:20.557 altname enp217s0f1np1 00:06:20.557 altname ens818f1np1 00:06:20.557 inet 192.168.100.9/24 scope global mlx_0_1 00:06:20.557 valid_lft forever preferred_lft forever 00:06:20.557 17:09:15 -- nvmf/common.sh@410 -- # return 0 00:06:20.557 17:09:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:20.557 17:09:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:20.557 17:09:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:06:20.557 17:09:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:06:20.557 17:09:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:06:20.557 17:09:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:20.557 17:09:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:20.557 17:09:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:20.557 17:09:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:20.557 17:09:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:20.557 17:09:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:20.557 17:09:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:20.557 17:09:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:20.557 17:09:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:20.557 17:09:16 -- nvmf/common.sh@104 -- # continue 2 00:06:20.557 17:09:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:20.557 17:09:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:20.557 17:09:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:20.557 17:09:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:20.557 17:09:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:20.557 17:09:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:20.557 17:09:16 -- 
nvmf/common.sh@104 -- # continue 2 00:06:20.557 17:09:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:20.557 17:09:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:06:20.557 17:09:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:20.557 17:09:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:20.557 17:09:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:20.557 17:09:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:20.557 17:09:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:20.557 17:09:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:06:20.557 17:09:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:20.557 17:09:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:20.557 17:09:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:20.557 17:09:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:20.557 17:09:16 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:06:20.557 192.168.100.9' 00:06:20.557 17:09:16 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:06:20.557 192.168.100.9' 00:06:20.557 17:09:16 -- nvmf/common.sh@445 -- # head -n 1 00:06:20.558 17:09:16 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:20.558 17:09:16 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:20.558 192.168.100.9' 00:06:20.558 17:09:16 -- nvmf/common.sh@446 -- # tail -n +2 00:06:20.558 17:09:16 -- nvmf/common.sh@446 -- # head -n 1 00:06:20.558 17:09:16 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:20.558 17:09:16 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:06:20.558 17:09:16 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:20.558 17:09:16 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:06:20.558 17:09:16 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:06:20.558 17:09:16 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:06:20.558 17:09:16 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:06:20.558 17:09:16 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:20.558 17:09:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:20.558 MallocForNvmf0 00:06:20.558 17:09:16 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:20.558 17:09:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:20.558 MallocForNvmf1 00:06:20.558 17:09:16 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:20.558 17:09:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:20.558 [2024-12-14 17:09:16.609995] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:20.558 [2024-12-14 17:09:16.651300] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xde8560/0xdf51c0) succeed. 00:06:20.558 [2024-12-14 17:09:16.666531] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdea700/0xe36860) succeed. 
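With the RDMA addresses resolved (192.168.100.8/9) and nvme-rdma loaded, the test builds its NVMe-oF configuration over the target's RPC socket. The bdev and transport calls traced above correspond to:

rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB namespace backing store, 512 B blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB, 1 KiB blocks
$rpc nvmf_create_transport -t rdma -u 8192 -c 0        # RDMA transport; -c 0 is raised to the 256 B minimum, per the warning above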
00:06:20.558 17:09:16 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.558 17:09:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.558 17:09:16 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:20.558 17:09:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:20.558 17:09:17 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.558 17:09:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.817 17:09:17 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:20.817 17:09:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:20.817 [2024-12-14 17:09:17.432883] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:20.817 17:09:17 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:20.817 17:09:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.817 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:06:21.076 17:09:17 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:21.076 17:09:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.076 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:06:21.076 17:09:17 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:21.076 17:09:17 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:21.076 17:09:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:21.076 MallocBdevForConfigChangeCheck 00:06:21.076 17:09:17 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:21.076 17:09:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.076 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:06:21.335 17:09:17 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:21.335 17:09:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.594 17:09:18 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:21.594 INFO: shutting down applications... 
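Taken together, the tgt_rpc calls above amount to the following RPC sequence against the freshly started target. A condensed sketch, with a small rpc wrapper function added here only for readability (the wrapper is not part of json_config.sh; paths are relative to the SPDK checkout):

    SOCK=/var/tmp/spdk_tgt.sock
    rpc() { ./scripts/rpc.py -s "$SOCK" "$@"; }

    rpc bdev_malloc_create 8 512  --name MallocForNvmf0      # RAM-backed bdevs that will back the namespaces
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t rdma -u 8192 -c 0           # -c 0 is raised to the 256-byte minimum, per the warning above
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The listener call is what produces the 'NVMe/RDMA Target Listening on 192.168.100.8 port 4420' notice; everything before it only shapes the target's in-memory configuration, which save_config then captures.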
00:06:21.594 17:09:18 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:21.594 17:09:18 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:21.594 17:09:18 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:21.594 17:09:18 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:24.130 Calling clear_iscsi_subsystem 00:06:24.130 Calling clear_nvmf_subsystem 00:06:24.130 Calling clear_nbd_subsystem 00:06:24.130 Calling clear_ublk_subsystem 00:06:24.130 Calling clear_vhost_blk_subsystem 00:06:24.130 Calling clear_vhost_scsi_subsystem 00:06:24.130 Calling clear_scheduler_subsystem 00:06:24.130 Calling clear_bdev_subsystem 00:06:24.130 Calling clear_accel_subsystem 00:06:24.130 Calling clear_vmd_subsystem 00:06:24.130 Calling clear_sock_subsystem 00:06:24.130 Calling clear_iobuf_subsystem 00:06:24.130 17:09:20 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:24.130 17:09:20 -- json_config/json_config.sh@396 -- # count=100 00:06:24.130 17:09:20 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:24.130 17:09:20 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.130 17:09:20 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:24.130 17:09:20 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:24.389 17:09:20 -- json_config/json_config.sh@398 -- # break 00:06:24.389 17:09:20 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:24.389 17:09:20 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:24.389 17:09:20 -- json_config/json_config.sh@120 -- # local app=target 00:06:24.389 17:09:20 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:24.389 17:09:20 -- json_config/json_config.sh@124 -- # [[ -n 1180148 ]] 00:06:24.389 17:09:20 -- json_config/json_config.sh@127 -- # kill -SIGINT 1180148 00:06:24.389 17:09:20 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:24.389 17:09:20 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:24.389 17:09:20 -- json_config/json_config.sh@130 -- # kill -0 1180148 00:06:24.389 17:09:20 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:24.980 17:09:21 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:24.980 17:09:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:24.980 17:09:21 -- json_config/json_config.sh@130 -- # kill -0 1180148 00:06:24.980 17:09:21 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:24.980 17:09:21 -- json_config/json_config.sh@132 -- # break 00:06:24.980 17:09:21 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:24.980 17:09:21 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:24.980 SPDK target shutdown done 00:06:24.980 17:09:21 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:24.980 INFO: relaunching applications... 
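The shutdown path traced here is a SIGINT plus a bounded liveness poll: the harness signals the target once, then checks up to 30 times at half-second intervals whether the pid has exited. Reduced to a sketch (pid taken from the trace; error handling omitted):

    pid=1180148
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then    # process is gone
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done

In this run the target exits after a single half-second wait, so only one sleep appears in the log before 'SPDK target shutdown done'.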
00:06:24.980 17:09:21 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.980 17:09:21 -- json_config/json_config.sh@98 -- # local app=target 00:06:24.980 17:09:21 -- json_config/json_config.sh@99 -- # shift 00:06:24.980 17:09:21 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:24.980 17:09:21 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:24.980 17:09:21 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:24.980 17:09:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:24.980 17:09:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:24.981 17:09:21 -- json_config/json_config.sh@111 -- # app_pid[$app]=1185364 00:06:24.981 17:09:21 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:24.981 Waiting for target to run... 00:06:24.981 17:09:21 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.981 17:09:21 -- json_config/json_config.sh@114 -- # waitforlisten 1185364 /var/tmp/spdk_tgt.sock 00:06:24.981 17:09:21 -- common/autotest_common.sh@829 -- # '[' -z 1185364 ']' 00:06:24.981 17:09:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.981 17:09:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.981 17:09:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:24.981 17:09:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.981 17:09:21 -- common/autotest_common.sh@10 -- # set +x 00:06:24.981 [2024-12-14 17:09:21.468717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:24.981 [2024-12-14 17:09:21.468774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185364 ] 00:06:24.981 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.240 [2024-12-14 17:09:21.915031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.499 [2024-12-14 17:09:21.942596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.499 [2024-12-14 17:09:21.942710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.789 [2024-12-14 17:09:24.963413] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7f2fb0/0x7b13f0) succeed. 00:06:28.789 [2024-12-14 17:09:24.974590] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7f5150/0x65ef90) succeed. 
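Relaunching reuses the configuration captured by save_config: spdk_tgt is started again with --json pointing at spdk_tgt_config.json, and the harness blocks in waitforlisten until the RPC socket answers. waitforlisten's internals are not shown in this excerpt, so the polling below is only an approximation of what it does (rpc_get_methods serves here as a cheap call that succeeds once the app is up):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    tgt_pid=$!

    # Rough stand-in for waitforlisten: poll the RPC socket until it responds.
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'target died during startup' >&2; exit 1; }
        sleep 0.5
    done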
00:06:28.789 [2024-12-14 17:09:25.022997] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:29.048 17:09:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.048 17:09:25 -- common/autotest_common.sh@862 -- # return 0 00:06:29.048 17:09:25 -- json_config/json_config.sh@115 -- # echo '' 00:06:29.048 00:06:29.048 17:09:25 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:29.048 17:09:25 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:29.048 INFO: Checking if target configuration is the same... 00:06:29.048 17:09:25 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:29.048 17:09:25 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.048 17:09:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.048 + '[' 2 -ne 2 ']' 00:06:29.048 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:29.048 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:29.048 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:29.048 +++ basename /dev/fd/62 00:06:29.048 ++ mktemp /tmp/62.XXX 00:06:29.048 + tmp_file_1=/tmp/62.IBN 00:06:29.048 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.048 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:29.048 + tmp_file_2=/tmp/spdk_tgt_config.json.sTw 00:06:29.048 + ret=0 00:06:29.048 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:29.307 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:29.307 + diff -u /tmp/62.IBN /tmp/spdk_tgt_config.json.sTw 00:06:29.307 + echo 'INFO: JSON config files are the same' 00:06:29.307 INFO: JSON config files are the same 00:06:29.307 + rm /tmp/62.IBN /tmp/spdk_tgt_config.json.sTw 00:06:29.566 + exit 0 00:06:29.566 17:09:25 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:29.566 17:09:25 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:29.566 INFO: changing configuration and checking if this can be detected... 00:06:29.566 17:09:25 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:29.566 17:09:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:29.566 17:09:26 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.566 17:09:26 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:29.566 17:09:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.566 + '[' 2 -ne 2 ']' 00:06:29.566 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:29.566 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
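Both configuration checks around this point follow the same normalize-and-diff pattern: the live configuration is dumped with save_config, that dump and the on-disk spdk_tgt_config.json are each passed through config_filter.py -method sort so key ordering cannot cause false mismatches, and the two results are compared with diff. An outline of one pass (assuming config_filter.py reads stdin, which is how json_diff.sh feeds it; temp-file names will differ from the mktemp suffixes in the trace):

    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > "$live"
    ./test/json_config/config_filter.py -method sort \
        < ./spdk_tgt_config.json > "$saved"

    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"

The first pass matches and exits 0. The second pass, run right after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, is expected to differ, which is why ret=1 and 'configuration change detected.' follow below.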
00:06:29.566 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:29.566 +++ basename /dev/fd/62 00:06:29.566 ++ mktemp /tmp/62.XXX 00:06:29.566 + tmp_file_1=/tmp/62.JqJ 00:06:29.566 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.566 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:29.566 + tmp_file_2=/tmp/spdk_tgt_config.json.zuL 00:06:29.566 + ret=0 00:06:29.566 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:29.825 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.085 + diff -u /tmp/62.JqJ /tmp/spdk_tgt_config.json.zuL 00:06:30.085 + ret=1 00:06:30.085 + echo '=== Start of file: /tmp/62.JqJ ===' 00:06:30.085 + cat /tmp/62.JqJ 00:06:30.085 + echo '=== End of file: /tmp/62.JqJ ===' 00:06:30.085 + echo '' 00:06:30.085 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zuL ===' 00:06:30.085 + cat /tmp/spdk_tgt_config.json.zuL 00:06:30.085 + echo '=== End of file: /tmp/spdk_tgt_config.json.zuL ===' 00:06:30.085 + echo '' 00:06:30.085 + rm /tmp/62.JqJ /tmp/spdk_tgt_config.json.zuL 00:06:30.085 + exit 1 00:06:30.085 17:09:26 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:30.085 INFO: configuration change detected. 00:06:30.085 17:09:26 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:30.085 17:09:26 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:30.085 17:09:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.085 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:06:30.085 17:09:26 -- json_config/json_config.sh@360 -- # local ret=0 00:06:30.085 17:09:26 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:30.085 17:09:26 -- json_config/json_config.sh@370 -- # [[ -n 1185364 ]] 00:06:30.085 17:09:26 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:30.085 17:09:26 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:30.085 17:09:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.085 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:06:30.085 17:09:26 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:30.085 17:09:26 -- json_config/json_config.sh@246 -- # uname -s 00:06:30.085 17:09:26 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:30.085 17:09:26 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:30.085 17:09:26 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:30.085 17:09:26 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:30.085 17:09:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.085 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:06:30.085 17:09:26 -- json_config/json_config.sh@376 -- # killprocess 1185364 00:06:30.085 17:09:26 -- common/autotest_common.sh@936 -- # '[' -z 1185364 ']' 00:06:30.085 17:09:26 -- common/autotest_common.sh@940 -- # kill -0 1185364 00:06:30.085 17:09:26 -- common/autotest_common.sh@941 -- # uname 00:06:30.085 17:09:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.085 17:09:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1185364 00:06:30.085 17:09:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.085 17:09:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.085 17:09:26 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 1185364' 00:06:30.085 killing process with pid 1185364 00:06:30.085 17:09:26 -- common/autotest_common.sh@955 -- # kill 1185364 00:06:30.085 17:09:26 -- common/autotest_common.sh@960 -- # wait 1185364 00:06:32.621 17:09:29 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.621 17:09:29 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:32.621 17:09:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:32.621 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.621 17:09:29 -- json_config/json_config.sh@381 -- # return 0 00:06:32.621 17:09:29 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:32.621 INFO: Success 00:06:32.621 17:09:29 -- json_config/json_config.sh@1 -- # nvmftestfini 00:06:32.621 17:09:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:32.621 17:09:29 -- nvmf/common.sh@116 -- # sync 00:06:32.621 17:09:29 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:06:32.621 17:09:29 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:06:32.621 17:09:29 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:32.621 17:09:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:32.621 17:09:29 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:06:32.621 00:06:32.621 real 0m24.667s 00:06:32.621 user 0m27.816s 00:06:32.621 sys 0m7.644s 00:06:32.621 17:09:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.621 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.621 ************************************ 00:06:32.621 END TEST json_config 00:06:32.621 ************************************ 00:06:32.621 17:09:29 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:32.621 17:09:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.621 17:09:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.621 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.881 ************************************ 00:06:32.881 START TEST json_config_extra_key 00:06:32.881 ************************************ 00:06:32.881 17:09:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:32.881 17:09:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:32.881 17:09:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:32.881 17:09:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:32.881 17:09:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:32.881 17:09:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:32.881 17:09:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:32.881 17:09:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:32.881 17:09:29 -- scripts/common.sh@335 -- # IFS=.-: 00:06:32.881 17:09:29 -- scripts/common.sh@335 -- # read -ra ver1 00:06:32.881 17:09:29 -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.881 17:09:29 -- scripts/common.sh@336 -- # read -ra ver2 00:06:32.881 17:09:29 -- scripts/common.sh@337 -- # local 'op=<' 00:06:32.881 17:09:29 -- scripts/common.sh@339 -- # ver1_l=2 00:06:32.881 17:09:29 -- scripts/common.sh@340 -- # ver2_l=1 00:06:32.881 17:09:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:32.881 17:09:29 -- scripts/common.sh@343 -- # case "$op" in 00:06:32.881 17:09:29 -- 
scripts/common.sh@344 -- # : 1 00:06:32.881 17:09:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:32.881 17:09:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.881 17:09:29 -- scripts/common.sh@364 -- # decimal 1 00:06:32.881 17:09:29 -- scripts/common.sh@352 -- # local d=1 00:06:32.881 17:09:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.881 17:09:29 -- scripts/common.sh@354 -- # echo 1 00:06:32.881 17:09:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:32.881 17:09:29 -- scripts/common.sh@365 -- # decimal 2 00:06:32.881 17:09:29 -- scripts/common.sh@352 -- # local d=2 00:06:32.881 17:09:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.881 17:09:29 -- scripts/common.sh@354 -- # echo 2 00:06:32.881 17:09:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:32.881 17:09:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:32.881 17:09:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:32.881 17:09:29 -- scripts/common.sh@367 -- # return 0 00:06:32.881 17:09:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.881 17:09:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:32.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.881 --rc genhtml_branch_coverage=1 00:06:32.881 --rc genhtml_function_coverage=1 00:06:32.881 --rc genhtml_legend=1 00:06:32.881 --rc geninfo_all_blocks=1 00:06:32.881 --rc geninfo_unexecuted_blocks=1 00:06:32.881 00:06:32.881 ' 00:06:32.881 17:09:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:32.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.881 --rc genhtml_branch_coverage=1 00:06:32.881 --rc genhtml_function_coverage=1 00:06:32.881 --rc genhtml_legend=1 00:06:32.881 --rc geninfo_all_blocks=1 00:06:32.881 --rc geninfo_unexecuted_blocks=1 00:06:32.881 00:06:32.881 ' 00:06:32.881 17:09:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:32.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.881 --rc genhtml_branch_coverage=1 00:06:32.881 --rc genhtml_function_coverage=1 00:06:32.881 --rc genhtml_legend=1 00:06:32.881 --rc geninfo_all_blocks=1 00:06:32.881 --rc geninfo_unexecuted_blocks=1 00:06:32.881 00:06:32.881 ' 00:06:32.881 17:09:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:32.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.881 --rc genhtml_branch_coverage=1 00:06:32.881 --rc genhtml_function_coverage=1 00:06:32.881 --rc genhtml_legend=1 00:06:32.881 --rc geninfo_all_blocks=1 00:06:32.881 --rc geninfo_unexecuted_blocks=1 00:06:32.881 00:06:32.881 ' 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.881 17:09:29 -- nvmf/common.sh@7 -- # uname -s 00:06:32.881 17:09:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.881 17:09:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.881 17:09:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.881 17:09:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.881 17:09:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.881 17:09:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.881 17:09:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.881 17:09:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.881 17:09:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
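The lcov probe traced above uses the generic version comparison from scripts/common.sh: both version strings are split on dots and dashes and compared field by field, with missing fields treated as zero. Stripped to its essentials, the 'lt 1.15 2' check behaves roughly like the helper below (the name version_lt is mine, not the script's):

    version_lt() {                    # succeeds if $1 sorts strictly before $2
        local IFS=.-
        local -a a=($1) b=($2)        # relies on IFS word-splitting of the versions
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                      # equal versions are not less-than
    }

    version_lt 1.15 2 && echo 'lcov is older than 2.x, keep the 1.x LCOV_OPTS'

Here lcov reports 1.15, so the branch that exports the --rc lcov_branch_coverage / lcov_function_coverage options is the one taken.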
00:06:32.881 17:09:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.881 17:09:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:32.881 17:09:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:32.881 17:09:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.881 17:09:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.881 17:09:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:32.881 17:09:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:32.881 17:09:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.881 17:09:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.881 17:09:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.881 17:09:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.881 17:09:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.881 17:09:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.881 17:09:29 -- paths/export.sh@5 -- # export PATH 00:06:32.881 17:09:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.881 17:09:29 -- nvmf/common.sh@46 -- # : 0 00:06:32.881 17:09:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:32.881 17:09:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:32.881 17:09:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:32.881 17:09:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.881 17:09:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.881 17:09:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:32.881 17:09:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:32.881 17:09:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@16 
-- # declare -A app_pid 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:32.881 INFO: launching applications... 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:32.881 17:09:29 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1186923 00:06:32.882 17:09:29 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:32.882 Waiting for target to run... 00:06:32.882 17:09:29 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1186923 /var/tmp/spdk_tgt.sock 00:06:32.882 17:09:29 -- common/autotest_common.sh@829 -- # '[' -z 1186923 ']' 00:06:32.882 17:09:29 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:32.882 17:09:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:32.882 17:09:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.882 17:09:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:32.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:32.882 17:09:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.882 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.882 [2024-12-14 17:09:29.556049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:32.882 [2024-12-14 17:09:29.556101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186923 ] 00:06:33.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.400 [2024-12-14 17:09:30.014245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.400 [2024-12-14 17:09:30.042213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.400 [2024-12-14 17:09:30.042328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.967 17:09:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.967 17:09:30 -- common/autotest_common.sh@862 -- # return 0 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:33.967 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:33.967 INFO: shutting down applications... 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1186923 ]] 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1186923 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1186923 00:06:33.967 17:09:30 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1186923 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:34.227 SPDK target shutdown done 00:06:34.227 17:09:30 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:34.227 Success 00:06:34.227 00:06:34.227 real 0m1.568s 00:06:34.227 user 0m1.136s 00:06:34.227 sys 0m0.594s 00:06:34.227 17:09:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.227 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:34.227 ************************************ 00:06:34.227 END TEST json_config_extra_key 00:06:34.227 ************************************ 00:06:34.487 17:09:30 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.487 17:09:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.487 17:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.487 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:34.487 ************************************ 00:06:34.487 START TEST alias_rpc 00:06:34.487 ************************************ 00:06:34.487 17:09:30 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.487 * Looking for test storage... 00:06:34.487 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:34.487 17:09:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:34.487 17:09:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:34.487 17:09:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:34.487 17:09:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:34.487 17:09:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:34.487 17:09:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:34.487 17:09:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:34.487 17:09:31 -- scripts/common.sh@335 -- # IFS=.-: 00:06:34.487 17:09:31 -- scripts/common.sh@335 -- # read -ra ver1 00:06:34.487 17:09:31 -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.487 17:09:31 -- scripts/common.sh@336 -- # read -ra ver2 00:06:34.487 17:09:31 -- scripts/common.sh@337 -- # local 'op=<' 00:06:34.487 17:09:31 -- scripts/common.sh@339 -- # ver1_l=2 00:06:34.487 17:09:31 -- scripts/common.sh@340 -- # ver2_l=1 00:06:34.487 17:09:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:34.487 17:09:31 -- scripts/common.sh@343 -- # case "$op" in 00:06:34.487 17:09:31 -- scripts/common.sh@344 -- # : 1 00:06:34.487 17:09:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:34.487 17:09:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.487 17:09:31 -- scripts/common.sh@364 -- # decimal 1 00:06:34.487 17:09:31 -- scripts/common.sh@352 -- # local d=1 00:06:34.487 17:09:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.487 17:09:31 -- scripts/common.sh@354 -- # echo 1 00:06:34.487 17:09:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:34.487 17:09:31 -- scripts/common.sh@365 -- # decimal 2 00:06:34.487 17:09:31 -- scripts/common.sh@352 -- # local d=2 00:06:34.487 17:09:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.487 17:09:31 -- scripts/common.sh@354 -- # echo 2 00:06:34.487 17:09:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:34.487 17:09:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:34.487 17:09:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:34.487 17:09:31 -- scripts/common.sh@367 -- # return 0 00:06:34.487 17:09:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.487 17:09:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:34.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.487 --rc genhtml_branch_coverage=1 00:06:34.487 --rc genhtml_function_coverage=1 00:06:34.487 --rc genhtml_legend=1 00:06:34.487 --rc geninfo_all_blocks=1 00:06:34.487 --rc geninfo_unexecuted_blocks=1 00:06:34.487 00:06:34.487 ' 00:06:34.487 17:09:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:34.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.487 --rc genhtml_branch_coverage=1 00:06:34.487 --rc genhtml_function_coverage=1 00:06:34.487 --rc genhtml_legend=1 00:06:34.487 --rc geninfo_all_blocks=1 00:06:34.487 --rc geninfo_unexecuted_blocks=1 00:06:34.487 00:06:34.487 ' 00:06:34.487 17:09:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:34.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.487 --rc genhtml_branch_coverage=1 00:06:34.487 --rc 
genhtml_function_coverage=1 00:06:34.487 --rc genhtml_legend=1 00:06:34.487 --rc geninfo_all_blocks=1 00:06:34.487 --rc geninfo_unexecuted_blocks=1 00:06:34.487 00:06:34.487 ' 00:06:34.487 17:09:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:34.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.487 --rc genhtml_branch_coverage=1 00:06:34.487 --rc genhtml_function_coverage=1 00:06:34.487 --rc genhtml_legend=1 00:06:34.487 --rc geninfo_all_blocks=1 00:06:34.487 --rc geninfo_unexecuted_blocks=1 00:06:34.487 00:06:34.487 ' 00:06:34.487 17:09:31 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.487 17:09:31 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1187267 00:06:34.487 17:09:31 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1187267 00:06:34.487 17:09:31 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.487 17:09:31 -- common/autotest_common.sh@829 -- # '[' -z 1187267 ']' 00:06:34.487 17:09:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.487 17:09:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.487 17:09:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.487 17:09:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.487 17:09:31 -- common/autotest_common.sh@10 -- # set +x 00:06:34.487 [2024-12-14 17:09:31.165152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.487 [2024-12-14 17:09:31.165203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187267 ] 00:06:34.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.805 [2024-12-14 17:09:31.249844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.805 [2024-12-14 17:09:31.286376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.805 [2024-12-14 17:09:31.286491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.424 17:09:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.424 17:09:31 -- common/autotest_common.sh@862 -- # return 0 00:06:35.424 17:09:31 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:35.683 17:09:32 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1187267 00:06:35.683 17:09:32 -- common/autotest_common.sh@936 -- # '[' -z 1187267 ']' 00:06:35.683 17:09:32 -- common/autotest_common.sh@940 -- # kill -0 1187267 00:06:35.683 17:09:32 -- common/autotest_common.sh@941 -- # uname 00:06:35.683 17:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.683 17:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1187267 00:06:35.683 17:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:35.683 17:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:35.683 17:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1187267' 00:06:35.683 killing process with pid 1187267 00:06:35.683 17:09:32 -- common/autotest_common.sh@955 -- # kill 1187267 00:06:35.683 17:09:32 -- 
common/autotest_common.sh@960 -- # wait 1187267 00:06:35.942 00:06:35.942 real 0m1.602s 00:06:35.942 user 0m1.675s 00:06:35.942 sys 0m0.504s 00:06:35.942 17:09:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.942 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.942 ************************************ 00:06:35.942 END TEST alias_rpc 00:06:35.942 ************************************ 00:06:35.942 17:09:32 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:35.942 17:09:32 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:35.942 17:09:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.942 17:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.942 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.942 ************************************ 00:06:35.942 START TEST spdkcli_tcp 00:06:35.942 ************************************ 00:06:35.942 17:09:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:36.201 * Looking for test storage... 00:06:36.201 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:36.201 17:09:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:36.201 17:09:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:36.201 17:09:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:36.201 17:09:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:36.201 17:09:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:36.201 17:09:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:36.201 17:09:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:36.201 17:09:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:36.201 17:09:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:36.201 17:09:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.201 17:09:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:36.201 17:09:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:36.201 17:09:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:36.201 17:09:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:36.201 17:09:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:36.201 17:09:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:36.201 17:09:32 -- scripts/common.sh@344 -- # : 1 00:06:36.201 17:09:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:36.201 17:09:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.201 17:09:32 -- scripts/common.sh@364 -- # decimal 1 00:06:36.201 17:09:32 -- scripts/common.sh@352 -- # local d=1 00:06:36.201 17:09:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.201 17:09:32 -- scripts/common.sh@354 -- # echo 1 00:06:36.201 17:09:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:36.201 17:09:32 -- scripts/common.sh@365 -- # decimal 2 00:06:36.201 17:09:32 -- scripts/common.sh@352 -- # local d=2 00:06:36.201 17:09:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.201 17:09:32 -- scripts/common.sh@354 -- # echo 2 00:06:36.201 17:09:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:36.201 17:09:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:36.201 17:09:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:36.201 17:09:32 -- scripts/common.sh@367 -- # return 0 00:06:36.201 17:09:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.201 17:09:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:36.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.201 --rc genhtml_branch_coverage=1 00:06:36.201 --rc genhtml_function_coverage=1 00:06:36.201 --rc genhtml_legend=1 00:06:36.201 --rc geninfo_all_blocks=1 00:06:36.201 --rc geninfo_unexecuted_blocks=1 00:06:36.201 00:06:36.201 ' 00:06:36.201 17:09:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:36.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.201 --rc genhtml_branch_coverage=1 00:06:36.201 --rc genhtml_function_coverage=1 00:06:36.201 --rc genhtml_legend=1 00:06:36.201 --rc geninfo_all_blocks=1 00:06:36.201 --rc geninfo_unexecuted_blocks=1 00:06:36.201 00:06:36.201 ' 00:06:36.201 17:09:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:36.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.201 --rc genhtml_branch_coverage=1 00:06:36.201 --rc genhtml_function_coverage=1 00:06:36.201 --rc genhtml_legend=1 00:06:36.201 --rc geninfo_all_blocks=1 00:06:36.201 --rc geninfo_unexecuted_blocks=1 00:06:36.201 00:06:36.201 ' 00:06:36.201 17:09:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:36.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.201 --rc genhtml_branch_coverage=1 00:06:36.201 --rc genhtml_function_coverage=1 00:06:36.201 --rc genhtml_legend=1 00:06:36.201 --rc geninfo_all_blocks=1 00:06:36.201 --rc geninfo_unexecuted_blocks=1 00:06:36.201 00:06:36.201 ' 00:06:36.201 17:09:32 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:36.201 17:09:32 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:36.201 17:09:32 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:36.201 17:09:32 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:36.201 17:09:32 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:36.201 17:09:32 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:36.201 17:09:32 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:36.201 17:09:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.201 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.201 17:09:32 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1187660 00:06:36.201 17:09:32 -- spdkcli/tcp.sh@27 -- # waitforlisten 1187660 00:06:36.201 17:09:32 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:36.201 17:09:32 -- common/autotest_common.sh@829 -- # '[' -z 1187660 ']' 00:06:36.201 17:09:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.201 17:09:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.201 17:09:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.201 17:09:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.201 17:09:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.201 [2024-12-14 17:09:32.832641] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:36.201 [2024-12-14 17:09:32.832699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187660 ] 00:06:36.201 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.459 [2024-12-14 17:09:32.918966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.459 [2024-12-14 17:09:32.957562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.460 [2024-12-14 17:09:32.957718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.460 [2024-12-14 17:09:32.957721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.027 17:09:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.027 17:09:33 -- common/autotest_common.sh@862 -- # return 0 00:06:37.027 17:09:33 -- spdkcli/tcp.sh@31 -- # socat_pid=1187796 00:06:37.027 17:09:33 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:37.027 17:09:33 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:37.286 [ 00:06:37.286 "bdev_malloc_delete", 00:06:37.286 "bdev_malloc_create", 00:06:37.286 "bdev_null_resize", 00:06:37.286 "bdev_null_delete", 00:06:37.286 "bdev_null_create", 00:06:37.286 "bdev_nvme_cuse_unregister", 00:06:37.286 "bdev_nvme_cuse_register", 00:06:37.286 "bdev_opal_new_user", 00:06:37.286 "bdev_opal_set_lock_state", 00:06:37.286 "bdev_opal_delete", 00:06:37.286 "bdev_opal_get_info", 00:06:37.286 "bdev_opal_create", 00:06:37.286 "bdev_nvme_opal_revert", 00:06:37.286 "bdev_nvme_opal_init", 00:06:37.286 "bdev_nvme_send_cmd", 00:06:37.286 "bdev_nvme_get_path_iostat", 00:06:37.286 "bdev_nvme_get_mdns_discovery_info", 00:06:37.286 "bdev_nvme_stop_mdns_discovery", 00:06:37.286 "bdev_nvme_start_mdns_discovery", 00:06:37.286 "bdev_nvme_set_multipath_policy", 00:06:37.286 "bdev_nvme_set_preferred_path", 00:06:37.286 "bdev_nvme_get_io_paths", 00:06:37.286 "bdev_nvme_remove_error_injection", 00:06:37.286 "bdev_nvme_add_error_injection", 00:06:37.286 "bdev_nvme_get_discovery_info", 00:06:37.286 "bdev_nvme_stop_discovery", 00:06:37.286 "bdev_nvme_start_discovery", 00:06:37.286 "bdev_nvme_get_controller_health_info", 00:06:37.286 "bdev_nvme_disable_controller", 00:06:37.286 "bdev_nvme_enable_controller", 00:06:37.286 "bdev_nvme_reset_controller", 00:06:37.286 "bdev_nvme_get_transport_statistics", 00:06:37.286 "bdev_nvme_apply_firmware", 00:06:37.286 "bdev_nvme_detach_controller", 
00:06:37.286 "bdev_nvme_get_controllers", 00:06:37.286 "bdev_nvme_attach_controller", 00:06:37.286 "bdev_nvme_set_hotplug", 00:06:37.286 "bdev_nvme_set_options", 00:06:37.286 "bdev_passthru_delete", 00:06:37.286 "bdev_passthru_create", 00:06:37.286 "bdev_lvol_grow_lvstore", 00:06:37.286 "bdev_lvol_get_lvols", 00:06:37.286 "bdev_lvol_get_lvstores", 00:06:37.286 "bdev_lvol_delete", 00:06:37.286 "bdev_lvol_set_read_only", 00:06:37.286 "bdev_lvol_resize", 00:06:37.286 "bdev_lvol_decouple_parent", 00:06:37.286 "bdev_lvol_inflate", 00:06:37.286 "bdev_lvol_rename", 00:06:37.286 "bdev_lvol_clone_bdev", 00:06:37.286 "bdev_lvol_clone", 00:06:37.286 "bdev_lvol_snapshot", 00:06:37.286 "bdev_lvol_create", 00:06:37.286 "bdev_lvol_delete_lvstore", 00:06:37.286 "bdev_lvol_rename_lvstore", 00:06:37.286 "bdev_lvol_create_lvstore", 00:06:37.286 "bdev_raid_set_options", 00:06:37.286 "bdev_raid_remove_base_bdev", 00:06:37.286 "bdev_raid_add_base_bdev", 00:06:37.286 "bdev_raid_delete", 00:06:37.286 "bdev_raid_create", 00:06:37.286 "bdev_raid_get_bdevs", 00:06:37.286 "bdev_error_inject_error", 00:06:37.286 "bdev_error_delete", 00:06:37.286 "bdev_error_create", 00:06:37.286 "bdev_split_delete", 00:06:37.286 "bdev_split_create", 00:06:37.286 "bdev_delay_delete", 00:06:37.286 "bdev_delay_create", 00:06:37.286 "bdev_delay_update_latency", 00:06:37.286 "bdev_zone_block_delete", 00:06:37.286 "bdev_zone_block_create", 00:06:37.286 "blobfs_create", 00:06:37.286 "blobfs_detect", 00:06:37.286 "blobfs_set_cache_size", 00:06:37.286 "bdev_aio_delete", 00:06:37.286 "bdev_aio_rescan", 00:06:37.286 "bdev_aio_create", 00:06:37.286 "bdev_ftl_set_property", 00:06:37.286 "bdev_ftl_get_properties", 00:06:37.286 "bdev_ftl_get_stats", 00:06:37.286 "bdev_ftl_unmap", 00:06:37.286 "bdev_ftl_unload", 00:06:37.286 "bdev_ftl_delete", 00:06:37.286 "bdev_ftl_load", 00:06:37.286 "bdev_ftl_create", 00:06:37.286 "bdev_virtio_attach_controller", 00:06:37.286 "bdev_virtio_scsi_get_devices", 00:06:37.286 "bdev_virtio_detach_controller", 00:06:37.286 "bdev_virtio_blk_set_hotplug", 00:06:37.286 "bdev_iscsi_delete", 00:06:37.286 "bdev_iscsi_create", 00:06:37.286 "bdev_iscsi_set_options", 00:06:37.286 "accel_error_inject_error", 00:06:37.286 "ioat_scan_accel_module", 00:06:37.286 "dsa_scan_accel_module", 00:06:37.286 "iaa_scan_accel_module", 00:06:37.286 "iscsi_set_options", 00:06:37.286 "iscsi_get_auth_groups", 00:06:37.286 "iscsi_auth_group_remove_secret", 00:06:37.286 "iscsi_auth_group_add_secret", 00:06:37.286 "iscsi_delete_auth_group", 00:06:37.286 "iscsi_create_auth_group", 00:06:37.286 "iscsi_set_discovery_auth", 00:06:37.286 "iscsi_get_options", 00:06:37.286 "iscsi_target_node_request_logout", 00:06:37.286 "iscsi_target_node_set_redirect", 00:06:37.286 "iscsi_target_node_set_auth", 00:06:37.286 "iscsi_target_node_add_lun", 00:06:37.286 "iscsi_get_connections", 00:06:37.286 "iscsi_portal_group_set_auth", 00:06:37.286 "iscsi_start_portal_group", 00:06:37.287 "iscsi_delete_portal_group", 00:06:37.287 "iscsi_create_portal_group", 00:06:37.287 "iscsi_get_portal_groups", 00:06:37.287 "iscsi_delete_target_node", 00:06:37.287 "iscsi_target_node_remove_pg_ig_maps", 00:06:37.287 "iscsi_target_node_add_pg_ig_maps", 00:06:37.287 "iscsi_create_target_node", 00:06:37.287 "iscsi_get_target_nodes", 00:06:37.287 "iscsi_delete_initiator_group", 00:06:37.287 "iscsi_initiator_group_remove_initiators", 00:06:37.287 "iscsi_initiator_group_add_initiators", 00:06:37.287 "iscsi_create_initiator_group", 00:06:37.287 "iscsi_get_initiator_groups", 00:06:37.287 
"nvmf_set_crdt", 00:06:37.287 "nvmf_set_config", 00:06:37.287 "nvmf_set_max_subsystems", 00:06:37.287 "nvmf_subsystem_get_listeners", 00:06:37.287 "nvmf_subsystem_get_qpairs", 00:06:37.287 "nvmf_subsystem_get_controllers", 00:06:37.287 "nvmf_get_stats", 00:06:37.287 "nvmf_get_transports", 00:06:37.287 "nvmf_create_transport", 00:06:37.287 "nvmf_get_targets", 00:06:37.287 "nvmf_delete_target", 00:06:37.287 "nvmf_create_target", 00:06:37.287 "nvmf_subsystem_allow_any_host", 00:06:37.287 "nvmf_subsystem_remove_host", 00:06:37.287 "nvmf_subsystem_add_host", 00:06:37.287 "nvmf_subsystem_remove_ns", 00:06:37.287 "nvmf_subsystem_add_ns", 00:06:37.287 "nvmf_subsystem_listener_set_ana_state", 00:06:37.287 "nvmf_discovery_get_referrals", 00:06:37.287 "nvmf_discovery_remove_referral", 00:06:37.287 "nvmf_discovery_add_referral", 00:06:37.287 "nvmf_subsystem_remove_listener", 00:06:37.287 "nvmf_subsystem_add_listener", 00:06:37.287 "nvmf_delete_subsystem", 00:06:37.287 "nvmf_create_subsystem", 00:06:37.287 "nvmf_get_subsystems", 00:06:37.287 "env_dpdk_get_mem_stats", 00:06:37.287 "nbd_get_disks", 00:06:37.287 "nbd_stop_disk", 00:06:37.287 "nbd_start_disk", 00:06:37.287 "ublk_recover_disk", 00:06:37.287 "ublk_get_disks", 00:06:37.287 "ublk_stop_disk", 00:06:37.287 "ublk_start_disk", 00:06:37.287 "ublk_destroy_target", 00:06:37.287 "ublk_create_target", 00:06:37.287 "virtio_blk_create_transport", 00:06:37.287 "virtio_blk_get_transports", 00:06:37.287 "vhost_controller_set_coalescing", 00:06:37.287 "vhost_get_controllers", 00:06:37.287 "vhost_delete_controller", 00:06:37.287 "vhost_create_blk_controller", 00:06:37.287 "vhost_scsi_controller_remove_target", 00:06:37.287 "vhost_scsi_controller_add_target", 00:06:37.287 "vhost_start_scsi_controller", 00:06:37.287 "vhost_create_scsi_controller", 00:06:37.287 "thread_set_cpumask", 00:06:37.287 "framework_get_scheduler", 00:06:37.287 "framework_set_scheduler", 00:06:37.287 "framework_get_reactors", 00:06:37.287 "thread_get_io_channels", 00:06:37.287 "thread_get_pollers", 00:06:37.287 "thread_get_stats", 00:06:37.287 "framework_monitor_context_switch", 00:06:37.287 "spdk_kill_instance", 00:06:37.287 "log_enable_timestamps", 00:06:37.287 "log_get_flags", 00:06:37.287 "log_clear_flag", 00:06:37.287 "log_set_flag", 00:06:37.287 "log_get_level", 00:06:37.287 "log_set_level", 00:06:37.287 "log_get_print_level", 00:06:37.287 "log_set_print_level", 00:06:37.287 "framework_enable_cpumask_locks", 00:06:37.287 "framework_disable_cpumask_locks", 00:06:37.287 "framework_wait_init", 00:06:37.287 "framework_start_init", 00:06:37.287 "scsi_get_devices", 00:06:37.287 "bdev_get_histogram", 00:06:37.287 "bdev_enable_histogram", 00:06:37.287 "bdev_set_qos_limit", 00:06:37.287 "bdev_set_qd_sampling_period", 00:06:37.287 "bdev_get_bdevs", 00:06:37.287 "bdev_reset_iostat", 00:06:37.287 "bdev_get_iostat", 00:06:37.287 "bdev_examine", 00:06:37.287 "bdev_wait_for_examine", 00:06:37.287 "bdev_set_options", 00:06:37.287 "notify_get_notifications", 00:06:37.287 "notify_get_types", 00:06:37.287 "accel_get_stats", 00:06:37.287 "accel_set_options", 00:06:37.287 "accel_set_driver", 00:06:37.287 "accel_crypto_key_destroy", 00:06:37.287 "accel_crypto_keys_get", 00:06:37.287 "accel_crypto_key_create", 00:06:37.287 "accel_assign_opc", 00:06:37.287 "accel_get_module_info", 00:06:37.287 "accel_get_opc_assignments", 00:06:37.287 "vmd_rescan", 00:06:37.287 "vmd_remove_device", 00:06:37.287 "vmd_enable", 00:06:37.287 "sock_set_default_impl", 00:06:37.287 "sock_impl_set_options", 00:06:37.287 
"sock_impl_get_options", 00:06:37.287 "iobuf_get_stats", 00:06:37.287 "iobuf_set_options", 00:06:37.287 "framework_get_pci_devices", 00:06:37.287 "framework_get_config", 00:06:37.287 "framework_get_subsystems", 00:06:37.287 "trace_get_info", 00:06:37.287 "trace_get_tpoint_group_mask", 00:06:37.287 "trace_disable_tpoint_group", 00:06:37.287 "trace_enable_tpoint_group", 00:06:37.287 "trace_clear_tpoint_mask", 00:06:37.287 "trace_set_tpoint_mask", 00:06:37.287 "spdk_get_version", 00:06:37.287 "rpc_get_methods" 00:06:37.287 ] 00:06:37.287 17:09:33 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:37.287 17:09:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.287 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.287 17:09:33 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:37.287 17:09:33 -- spdkcli/tcp.sh@38 -- # killprocess 1187660 00:06:37.287 17:09:33 -- common/autotest_common.sh@936 -- # '[' -z 1187660 ']' 00:06:37.287 17:09:33 -- common/autotest_common.sh@940 -- # kill -0 1187660 00:06:37.287 17:09:33 -- common/autotest_common.sh@941 -- # uname 00:06:37.287 17:09:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.287 17:09:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1187660 00:06:37.287 17:09:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.287 17:09:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.287 17:09:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1187660' 00:06:37.287 killing process with pid 1187660 00:06:37.287 17:09:33 -- common/autotest_common.sh@955 -- # kill 1187660 00:06:37.287 17:09:33 -- common/autotest_common.sh@960 -- # wait 1187660 00:06:37.855 00:06:37.855 real 0m1.650s 00:06:37.855 user 0m2.945s 00:06:37.855 sys 0m0.547s 00:06:37.855 17:09:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.855 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.855 ************************************ 00:06:37.855 END TEST spdkcli_tcp 00:06:37.855 ************************************ 00:06:37.855 17:09:34 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.855 17:09:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.855 17:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.855 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.855 ************************************ 00:06:37.855 START TEST dpdk_mem_utility 00:06:37.855 ************************************ 00:06:37.855 17:09:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.855 * Looking for test storage... 
00:06:37.855 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:37.855 17:09:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:37.855 17:09:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:37.855 17:09:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:37.855 17:09:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:37.855 17:09:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:37.855 17:09:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:37.855 17:09:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:37.855 17:09:34 -- scripts/common.sh@335 -- # IFS=.-: 00:06:37.855 17:09:34 -- scripts/common.sh@335 -- # read -ra ver1 00:06:37.855 17:09:34 -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.855 17:09:34 -- scripts/common.sh@336 -- # read -ra ver2 00:06:37.855 17:09:34 -- scripts/common.sh@337 -- # local 'op=<' 00:06:37.855 17:09:34 -- scripts/common.sh@339 -- # ver1_l=2 00:06:37.855 17:09:34 -- scripts/common.sh@340 -- # ver2_l=1 00:06:37.855 17:09:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:37.855 17:09:34 -- scripts/common.sh@343 -- # case "$op" in 00:06:37.855 17:09:34 -- scripts/common.sh@344 -- # : 1 00:06:37.855 17:09:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:37.855 17:09:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.855 17:09:34 -- scripts/common.sh@364 -- # decimal 1 00:06:37.855 17:09:34 -- scripts/common.sh@352 -- # local d=1 00:06:37.855 17:09:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.855 17:09:34 -- scripts/common.sh@354 -- # echo 1 00:06:37.855 17:09:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:37.855 17:09:34 -- scripts/common.sh@365 -- # decimal 2 00:06:37.855 17:09:34 -- scripts/common.sh@352 -- # local d=2 00:06:37.855 17:09:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.855 17:09:34 -- scripts/common.sh@354 -- # echo 2 00:06:37.855 17:09:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:37.855 17:09:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:37.855 17:09:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:37.855 17:09:34 -- scripts/common.sh@367 -- # return 0 00:06:37.855 17:09:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.855 17:09:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.855 --rc genhtml_branch_coverage=1 00:06:37.855 --rc genhtml_function_coverage=1 00:06:37.855 --rc genhtml_legend=1 00:06:37.855 --rc geninfo_all_blocks=1 00:06:37.855 --rc geninfo_unexecuted_blocks=1 00:06:37.855 00:06:37.855 ' 00:06:37.855 17:09:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.855 --rc genhtml_branch_coverage=1 00:06:37.855 --rc genhtml_function_coverage=1 00:06:37.855 --rc genhtml_legend=1 00:06:37.855 --rc geninfo_all_blocks=1 00:06:37.855 --rc geninfo_unexecuted_blocks=1 00:06:37.855 00:06:37.855 ' 00:06:37.855 17:09:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.855 --rc genhtml_branch_coverage=1 00:06:37.855 --rc genhtml_function_coverage=1 00:06:37.855 --rc genhtml_legend=1 00:06:37.855 --rc geninfo_all_blocks=1 00:06:37.855 --rc geninfo_unexecuted_blocks=1 00:06:37.855 
00:06:37.855 ' 00:06:37.855 17:09:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.855 --rc genhtml_branch_coverage=1 00:06:37.855 --rc genhtml_function_coverage=1 00:06:37.855 --rc genhtml_legend=1 00:06:37.855 --rc geninfo_all_blocks=1 00:06:37.855 --rc geninfo_unexecuted_blocks=1 00:06:37.855 00:06:37.855 ' 00:06:37.855 17:09:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:37.855 17:09:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1188092 00:06:37.855 17:09:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1188092 00:06:37.855 17:09:34 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.855 17:09:34 -- common/autotest_common.sh@829 -- # '[' -z 1188092 ']' 00:06:37.855 17:09:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.855 17:09:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.855 17:09:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.855 17:09:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.855 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.856 [2024-12-14 17:09:34.517995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:37.856 [2024-12-14 17:09:34.518055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188092 ] 00:06:38.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.115 [2024-12-14 17:09:34.600482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.115 [2024-12-14 17:09:34.637371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.115 [2024-12-14 17:09:34.637503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.682 17:09:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.682 17:09:35 -- common/autotest_common.sh@862 -- # return 0 00:06:38.682 17:09:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:38.682 17:09:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:38.682 17:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.682 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:38.682 { 00:06:38.682 "filename": "/tmp/spdk_mem_dump.txt" 00:06:38.682 } 00:06:38.682 17:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.682 17:09:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:38.941 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:38.941 1 heaps totaling size 814.000000 MiB 00:06:38.941 size: 814.000000 MiB heap id: 0 00:06:38.941 end heaps---------- 00:06:38.941 8 mempools totaling size 598.116089 MiB 00:06:38.941 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:38.941 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:38.941 size: 84.521057 MiB name: 
bdev_io_1188092 00:06:38.941 size: 51.011292 MiB name: evtpool_1188092 00:06:38.941 size: 50.003479 MiB name: msgpool_1188092 00:06:38.941 size: 21.763794 MiB name: PDU_Pool 00:06:38.941 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:38.941 size: 0.026123 MiB name: Session_Pool 00:06:38.941 end mempools------- 00:06:38.941 6 memzones totaling size 4.142822 MiB 00:06:38.941 size: 1.000366 MiB name: RG_ring_0_1188092 00:06:38.941 size: 1.000366 MiB name: RG_ring_1_1188092 00:06:38.941 size: 1.000366 MiB name: RG_ring_4_1188092 00:06:38.941 size: 1.000366 MiB name: RG_ring_5_1188092 00:06:38.941 size: 0.125366 MiB name: RG_ring_2_1188092 00:06:38.941 size: 0.015991 MiB name: RG_ring_3_1188092 00:06:38.941 end memzones------- 00:06:38.941 17:09:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:38.941 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:38.941 list of free elements. size: 12.519348 MiB 00:06:38.941 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:38.941 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:38.941 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:38.941 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:38.941 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:38.941 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:38.941 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:38.941 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:38.941 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:38.941 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:38.941 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:38.941 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:38.941 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:38.941 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:38.941 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:38.941 list of standard malloc elements. 
size: 199.218079 MiB 00:06:38.941 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:38.941 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:38.941 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:38.941 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:38.941 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:38.941 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:38.941 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:38.941 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:38.941 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:38.941 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:38.941 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:38.941 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:38.941 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:38.941 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:38.941 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:38.941 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:38.941 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:38.941 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:38.941 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:38.941 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:38.941 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:38.941 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:38.942 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:38.942 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:38.942 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:38.942 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:38.942 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:38.942 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:38.942 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:38.942 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:38.942 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:38.942 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:38.942 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:38.942 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:38.942 list of memzone associated elements. 
size: 602.262573 MiB 00:06:38.942 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:38.942 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:38.942 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:38.942 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:38.942 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:38.942 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1188092_0 00:06:38.942 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:38.942 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1188092_0 00:06:38.942 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:38.942 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1188092_0 00:06:38.942 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:38.942 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:38.942 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:38.942 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:38.942 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:38.942 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1188092 00:06:38.942 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:38.942 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1188092 00:06:38.942 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:38.942 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1188092 00:06:38.942 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:38.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:38.942 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:38.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:38.942 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:38.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:38.942 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:38.942 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:38.942 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1188092 00:06:38.942 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1188092 00:06:38.942 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1188092 00:06:38.942 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:38.942 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1188092 00:06:38.942 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:38.942 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1188092 00:06:38.942 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:38.942 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:38.942 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:38.942 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:38.942 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:38.942 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:38.942 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:38.942 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1188092 00:06:38.942 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:38.942 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:38.942 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:38.942 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:38.942 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:38.942 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1188092 00:06:38.942 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:38.942 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:38.942 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:38.942 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1188092 00:06:38.942 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:38.942 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1188092 00:06:38.942 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:38.942 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:38.942 17:09:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:38.942 17:09:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1188092 00:06:38.942 17:09:35 -- common/autotest_common.sh@936 -- # '[' -z 1188092 ']' 00:06:38.942 17:09:35 -- common/autotest_common.sh@940 -- # kill -0 1188092 00:06:38.942 17:09:35 -- common/autotest_common.sh@941 -- # uname 00:06:38.942 17:09:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:38.942 17:09:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1188092 00:06:38.942 17:09:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:38.942 17:09:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:38.942 17:09:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1188092' 00:06:38.942 killing process with pid 1188092 00:06:38.942 17:09:35 -- common/autotest_common.sh@955 -- # kill 1188092 00:06:38.942 17:09:35 -- common/autotest_common.sh@960 -- # wait 1188092 00:06:39.202 00:06:39.202 real 0m1.510s 00:06:39.202 user 0m1.517s 00:06:39.202 sys 0m0.501s 00:06:39.202 17:09:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.202 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:39.202 ************************************ 00:06:39.202 END TEST dpdk_mem_utility 00:06:39.202 ************************************ 00:06:39.202 17:09:35 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:39.202 17:09:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.202 17:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.202 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:39.202 ************************************ 00:06:39.202 START TEST event 00:06:39.202 ************************************ 00:06:39.202 17:09:35 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:39.462 * Looking for test storage... 
00:06:39.462 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:39.462 17:09:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:39.462 17:09:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:39.462 17:09:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:39.462 17:09:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:39.462 17:09:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:39.462 17:09:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:39.462 17:09:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:39.462 17:09:36 -- scripts/common.sh@335 -- # IFS=.-: 00:06:39.462 17:09:36 -- scripts/common.sh@335 -- # read -ra ver1 00:06:39.462 17:09:36 -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.462 17:09:36 -- scripts/common.sh@336 -- # read -ra ver2 00:06:39.462 17:09:36 -- scripts/common.sh@337 -- # local 'op=<' 00:06:39.462 17:09:36 -- scripts/common.sh@339 -- # ver1_l=2 00:06:39.462 17:09:36 -- scripts/common.sh@340 -- # ver2_l=1 00:06:39.462 17:09:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:39.462 17:09:36 -- scripts/common.sh@343 -- # case "$op" in 00:06:39.462 17:09:36 -- scripts/common.sh@344 -- # : 1 00:06:39.462 17:09:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:39.462 17:09:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.462 17:09:36 -- scripts/common.sh@364 -- # decimal 1 00:06:39.462 17:09:36 -- scripts/common.sh@352 -- # local d=1 00:06:39.462 17:09:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.462 17:09:36 -- scripts/common.sh@354 -- # echo 1 00:06:39.462 17:09:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:39.462 17:09:36 -- scripts/common.sh@365 -- # decimal 2 00:06:39.462 17:09:36 -- scripts/common.sh@352 -- # local d=2 00:06:39.462 17:09:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.462 17:09:36 -- scripts/common.sh@354 -- # echo 2 00:06:39.462 17:09:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:39.462 17:09:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:39.462 17:09:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:39.462 17:09:36 -- scripts/common.sh@367 -- # return 0 00:06:39.462 17:09:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.462 17:09:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:39.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.462 --rc genhtml_branch_coverage=1 00:06:39.462 --rc genhtml_function_coverage=1 00:06:39.462 --rc genhtml_legend=1 00:06:39.462 --rc geninfo_all_blocks=1 00:06:39.462 --rc geninfo_unexecuted_blocks=1 00:06:39.462 00:06:39.462 ' 00:06:39.462 17:09:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:39.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.462 --rc genhtml_branch_coverage=1 00:06:39.462 --rc genhtml_function_coverage=1 00:06:39.462 --rc genhtml_legend=1 00:06:39.462 --rc geninfo_all_blocks=1 00:06:39.462 --rc geninfo_unexecuted_blocks=1 00:06:39.462 00:06:39.462 ' 00:06:39.462 17:09:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:39.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.462 --rc genhtml_branch_coverage=1 00:06:39.462 --rc genhtml_function_coverage=1 00:06:39.462 --rc genhtml_legend=1 00:06:39.462 --rc geninfo_all_blocks=1 00:06:39.462 --rc geninfo_unexecuted_blocks=1 00:06:39.462 00:06:39.462 ' 
00:06:39.462 17:09:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:39.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.462 --rc genhtml_branch_coverage=1 00:06:39.462 --rc genhtml_function_coverage=1 00:06:39.462 --rc genhtml_legend=1 00:06:39.462 --rc geninfo_all_blocks=1 00:06:39.462 --rc geninfo_unexecuted_blocks=1 00:06:39.462 00:06:39.462 ' 00:06:39.462 17:09:36 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:39.462 17:09:36 -- bdev/nbd_common.sh@6 -- # set -e 00:06:39.462 17:09:36 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:39.462 17:09:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:39.462 17:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.462 17:09:36 -- common/autotest_common.sh@10 -- # set +x 00:06:39.462 ************************************ 00:06:39.462 START TEST event_perf 00:06:39.462 ************************************ 00:06:39.462 17:09:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:39.462 Running I/O for 1 seconds...[2024-12-14 17:09:36.063756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:39.462 [2024-12-14 17:09:36.063835] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188428 ] 00:06:39.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.721 [2024-12-14 17:09:36.154135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.721 [2024-12-14 17:09:36.192433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.721 [2024-12-14 17:09:36.192545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.721 [2024-12-14 17:09:36.192595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.721 [2024-12-14 17:09:36.192596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.659 Running I/O for 1 seconds... 00:06:40.659 lcore 0: 212504 00:06:40.659 lcore 1: 212505 00:06:40.659 lcore 2: 212505 00:06:40.659 lcore 3: 212504 00:06:40.659 done. 
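The per-lcore counts printed above come from the event_perf binary that this run invoked with -m 0xF -t 1. As a minimal sketch only, the same measurement could be repeated by hand using the paths and flags shown in this log; the working directory and hugepage setup are assumptions about the CI host, and the absolute numbers will differ per machine:
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk          # repository path as it appears in this log
  ./test/event/event_perf/event_perf -m 0xF -t 1            # 4 reactors (core mask 0xF), run for 1 second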
00:06:40.659 00:06:40.659 real 0m1.210s 00:06:40.659 user 0m4.094s 00:06:40.659 sys 0m0.112s 00:06:40.659 17:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.659 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:40.659 ************************************ 00:06:40.659 END TEST event_perf 00:06:40.659 ************************************ 00:06:40.659 17:09:37 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:40.659 17:09:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:40.659 17:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.660 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:40.660 ************************************ 00:06:40.660 START TEST event_reactor 00:06:40.660 ************************************ 00:06:40.660 17:09:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:40.660 [2024-12-14 17:09:37.325385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.660 [2024-12-14 17:09:37.325468] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188575 ] 00:06:40.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.919 [2024-12-14 17:09:37.411938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.919 [2024-12-14 17:09:37.448225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.857 test_start 00:06:41.857 oneshot 00:06:41.857 tick 100 00:06:41.857 tick 100 00:06:41.857 tick 250 00:06:41.857 tick 100 00:06:41.857 tick 100 00:06:41.857 tick 100 00:06:41.857 tick 250 00:06:41.857 tick 500 00:06:41.857 tick 100 00:06:41.857 tick 100 00:06:41.857 tick 250 00:06:41.857 tick 100 00:06:41.857 tick 100 00:06:41.857 test_end 00:06:41.857 00:06:41.857 real 0m1.202s 00:06:41.857 user 0m1.101s 00:06:41.857 sys 0m0.096s 00:06:41.857 17:09:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.857 17:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:41.857 ************************************ 00:06:41.857 END TEST event_reactor 00:06:41.857 ************************************ 00:06:42.117 17:09:38 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:42.117 17:09:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:42.117 17:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.117 17:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:42.117 ************************************ 00:06:42.117 START TEST event_reactor_perf 00:06:42.117 ************************************ 00:06:42.117 17:09:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:42.117 [2024-12-14 17:09:38.578046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:42.117 [2024-12-14 17:09:38.578134] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188784 ] 00:06:42.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.117 [2024-12-14 17:09:38.665503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.117 [2024-12-14 17:09:38.702069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.495 test_start 00:06:43.495 test_end 00:06:43.495 Performance: 510411 events per second 00:06:43.495 00:06:43.495 real 0m1.205s 00:06:43.495 user 0m1.106s 00:06:43.495 sys 0m0.093s 00:06:43.495 17:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.495 17:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:43.495 ************************************ 00:06:43.495 END TEST event_reactor_perf 00:06:43.495 ************************************ 00:06:43.495 17:09:39 -- event/event.sh@49 -- # uname -s 00:06:43.495 17:09:39 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:43.495 17:09:39 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:43.495 17:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.495 17:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.495 17:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:43.495 ************************************ 00:06:43.495 START TEST event_scheduler 00:06:43.495 ************************************ 00:06:43.495 17:09:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:43.495 * Looking for test storage... 00:06:43.495 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:43.495 17:09:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:43.495 17:09:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:43.495 17:09:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:43.495 17:09:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:43.495 17:09:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:43.495 17:09:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:43.495 17:09:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:43.495 17:09:39 -- scripts/common.sh@335 -- # IFS=.-: 00:06:43.495 17:09:39 -- scripts/common.sh@335 -- # read -ra ver1 00:06:43.495 17:09:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.495 17:09:39 -- scripts/common.sh@336 -- # read -ra ver2 00:06:43.495 17:09:39 -- scripts/common.sh@337 -- # local 'op=<' 00:06:43.495 17:09:39 -- scripts/common.sh@339 -- # ver1_l=2 00:06:43.495 17:09:39 -- scripts/common.sh@340 -- # ver2_l=1 00:06:43.495 17:09:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:43.495 17:09:39 -- scripts/common.sh@343 -- # case "$op" in 00:06:43.495 17:09:39 -- scripts/common.sh@344 -- # : 1 00:06:43.496 17:09:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:43.496 17:09:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.496 17:09:39 -- scripts/common.sh@364 -- # decimal 1 00:06:43.496 17:09:39 -- scripts/common.sh@352 -- # local d=1 00:06:43.496 17:09:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.496 17:09:39 -- scripts/common.sh@354 -- # echo 1 00:06:43.496 17:09:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:43.496 17:09:39 -- scripts/common.sh@365 -- # decimal 2 00:06:43.496 17:09:39 -- scripts/common.sh@352 -- # local d=2 00:06:43.496 17:09:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.496 17:09:39 -- scripts/common.sh@354 -- # echo 2 00:06:43.496 17:09:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:43.496 17:09:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:43.496 17:09:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:43.496 17:09:40 -- scripts/common.sh@367 -- # return 0 00:06:43.496 17:09:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.496 17:09:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.496 --rc genhtml_branch_coverage=1 00:06:43.496 --rc genhtml_function_coverage=1 00:06:43.496 --rc genhtml_legend=1 00:06:43.496 --rc geninfo_all_blocks=1 00:06:43.496 --rc geninfo_unexecuted_blocks=1 00:06:43.496 00:06:43.496 ' 00:06:43.496 17:09:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.496 --rc genhtml_branch_coverage=1 00:06:43.496 --rc genhtml_function_coverage=1 00:06:43.496 --rc genhtml_legend=1 00:06:43.496 --rc geninfo_all_blocks=1 00:06:43.496 --rc geninfo_unexecuted_blocks=1 00:06:43.496 00:06:43.496 ' 00:06:43.496 17:09:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.496 --rc genhtml_branch_coverage=1 00:06:43.496 --rc genhtml_function_coverage=1 00:06:43.496 --rc genhtml_legend=1 00:06:43.496 --rc geninfo_all_blocks=1 00:06:43.496 --rc geninfo_unexecuted_blocks=1 00:06:43.496 00:06:43.496 ' 00:06:43.496 17:09:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:43.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.496 --rc genhtml_branch_coverage=1 00:06:43.496 --rc genhtml_function_coverage=1 00:06:43.496 --rc genhtml_legend=1 00:06:43.496 --rc geninfo_all_blocks=1 00:06:43.496 --rc geninfo_unexecuted_blocks=1 00:06:43.496 00:06:43.496 ' 00:06:43.496 17:09:40 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:43.496 17:09:40 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1189104 00:06:43.496 17:09:40 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:43.496 17:09:40 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.496 17:09:40 -- scheduler/scheduler.sh@37 -- # waitforlisten 1189104 00:06:43.496 17:09:40 -- common/autotest_common.sh@829 -- # '[' -z 1189104 ']' 00:06:43.496 17:09:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.496 17:09:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.496 17:09:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.496 17:09:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.496 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:43.496 [2024-12-14 17:09:40.048110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:43.496 [2024-12-14 17:09:40.048166] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189104 ] 00:06:43.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.496 [2024-12-14 17:09:40.132386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.496 [2024-12-14 17:09:40.172566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.496 [2024-12-14 17:09:40.172675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.496 [2024-12-14 17:09:40.172784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.496 [2024-12-14 17:09:40.172785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.433 17:09:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.433 17:09:40 -- common/autotest_common.sh@862 -- # return 0 00:06:44.433 17:09:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:44.433 17:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.433 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.433 POWER: Env isn't set yet! 00:06:44.433 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:44.433 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.433 POWER: Cannot set governor of lcore 0 to userspace 00:06:44.433 POWER: Attempting to initialise PSTAT power management... 00:06:44.434 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:44.434 POWER: Initialized successfully for lcore 0 power management 00:06:44.434 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:44.434 POWER: Initialized successfully for lcore 1 power management 00:06:44.434 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:44.434 POWER: Initialized successfully for lcore 2 power management 00:06:44.434 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:44.434 POWER: Initialized successfully for lcore 3 power management 00:06:44.434 [2024-12-14 17:09:40.922685] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:44.434 [2024-12-14 17:09:40.922701] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:44.434 [2024-12-14 17:09:40.922709] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:44.434 17:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:44.434 17:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 [2024-12-14 17:09:40.985970] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
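Before the "Scheduler test application started" message above, the test switches the running app to the dynamic scheduler and then starts framework initialization over RPC. A small sketch of issuing the same calls manually, assuming the app is still listening on the default /var/tmp/spdk.sock; all three method names appear in the rpc_get_methods listing earlier in this log:
  ./scripts/rpc.py framework_set_scheduler dynamic          # same call the test makes via rpc_cmd
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py framework_get_scheduler                  # should now report the dynamic scheduler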
00:06:44.434 17:09:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:44.434 17:09:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.434 17:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.434 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 ************************************ 00:06:44.434 START TEST scheduler_create_thread 00:06:44.434 ************************************ 00:06:44.434 17:09:40 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:44.434 17:09:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:44.434 17:09:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 2 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 3 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 4 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 5 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 6 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 7 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 8 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 9 00:06:44.434 
17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 10 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:44.434 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:44.434 17:09:41 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:44.434 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.434 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.371 17:09:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.371 17:09:41 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:45.371 17:09:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.371 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:46.749 17:09:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.749 17:09:43 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:46.749 17:09:43 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:46.749 17:09:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.749 17:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:48.128 17:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.128 00:06:48.128 real 0m3.384s 00:06:48.128 user 0m0.027s 00:06:48.128 sys 0m0.003s 00:06:48.128 17:09:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.128 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:48.128 ************************************ 00:06:48.128 END TEST scheduler_create_thread 00:06:48.128 ************************************ 00:06:48.128 17:09:44 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:48.128 17:09:44 -- scheduler/scheduler.sh@46 -- # killprocess 1189104 00:06:48.128 17:09:44 -- common/autotest_common.sh@936 -- # '[' -z 1189104 ']' 00:06:48.128 17:09:44 -- common/autotest_common.sh@940 -- # kill -0 1189104 00:06:48.128 17:09:44 -- common/autotest_common.sh@941 -- # uname 00:06:48.128 17:09:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.128 17:09:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1189104 00:06:48.128 17:09:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:48.128 17:09:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:48.128 17:09:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1189104' 00:06:48.128 killing process with pid 1189104 00:06:48.128 17:09:44 -- common/autotest_common.sh@955 -- # kill 1189104 00:06:48.128 17:09:44 -- common/autotest_common.sh@960 -- # wait 1189104 00:06:48.128 [2024-12-14 17:09:44.757898] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
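The scheduler_create_thread test above drives the scheduler-plugin RPCs: it creates pinned threads with a cpumask and activity level, re-weights one, and deletes another. A rough sketch of that sequence, assuming the scheduler test app is still up on /var/tmp/spdk.sock and the plugin module is importable (e.g. PYTHONPATH pointing at test/event/scheduler); the flags and the thread ids 11 and 12 are the ones reported in this log:
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50     # thread id 11 from the log
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12            # thread id 12 from the log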
00:06:48.387 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:48.387 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:48.387 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:48.387 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:48.387 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:48.387 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:48.387 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:48.387 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:48.387 00:06:48.387 real 0m5.161s 00:06:48.387 user 0m10.615s 00:06:48.387 sys 0m0.433s 00:06:48.387 17:09:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.387 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:48.387 ************************************ 00:06:48.387 END TEST event_scheduler 00:06:48.387 ************************************ 00:06:48.387 17:09:45 -- event/event.sh@51 -- # modprobe -n nbd 00:06:48.387 17:09:45 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:48.387 17:09:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:48.387 17:09:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.387 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:06:48.387 ************************************ 00:06:48.387 START TEST app_repeat 00:06:48.387 ************************************ 00:06:48.387 17:09:45 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:48.387 17:09:45 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.387 17:09:45 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.387 17:09:45 -- event/event.sh@13 -- # local nbd_list 00:06:48.387 17:09:45 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.387 17:09:45 -- event/event.sh@14 -- # local bdev_list 00:06:48.387 17:09:45 -- event/event.sh@15 -- # local repeat_times=4 00:06:48.387 17:09:45 -- event/event.sh@17 -- # modprobe nbd 00:06:48.387 17:09:45 -- event/event.sh@19 -- # repeat_pid=1190037 00:06:48.387 17:09:45 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.387 17:09:45 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1190037' 00:06:48.387 Process app_repeat pid: 1190037 00:06:48.387 17:09:45 -- event/event.sh@23 -- # for i in {0..2} 00:06:48.387 17:09:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:48.387 spdk_app_start Round 0 00:06:48.387 17:09:45 -- event/event.sh@25 -- # waitforlisten 1190037 /var/tmp/spdk-nbd.sock 00:06:48.387 17:09:45 -- common/autotest_common.sh@829 -- # '[' -z 1190037 ']' 00:06:48.387 17:09:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.387 17:09:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.387 17:09:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
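app_repeat is launched with its RPC server on /var/tmp/spdk-nbd.sock; once waitforlisten succeeds, the test creates the two malloc bdevs used in the nbd steps that follow. As a sketch under those assumptions, the same socket could be probed and the bdevs created by hand (socket path, sizes, and the resulting Malloc0/Malloc1 names are the ones shown in this log):
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods              # confirms the app is listening
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1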
00:06:48.387 17:09:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.387 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:06:48.387 17:09:45 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:48.387 [2024-12-14 17:09:45.063407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:48.387 [2024-12-14 17:09:45.063478] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190037 ] 00:06:48.648 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.648 [2024-12-14 17:09:45.134440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.648 [2024-12-14 17:09:45.172256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.648 [2024-12-14 17:09:45.172259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.214 17:09:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.214 17:09:45 -- common/autotest_common.sh@862 -- # return 0 00:06:49.214 17:09:45 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.474 Malloc0 00:06:49.474 17:09:46 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.733 Malloc1 00:06:49.733 17:09:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@12 -- # local i 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.733 17:09:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.993 /dev/nbd0 00:06:49.993 17:09:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.993 17:09:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.993 17:09:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:49.993 17:09:46 -- common/autotest_common.sh@867 -- # local i 00:06:49.993 17:09:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.993 17:09:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.993 17:09:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 
/proc/partitions 00:06:49.993 17:09:46 -- common/autotest_common.sh@871 -- # break 00:06:49.993 17:09:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.993 17:09:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.993 17:09:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.993 1+0 records in 00:06:49.993 1+0 records out 00:06:49.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234574 s, 17.5 MB/s 00:06:49.993 17:09:46 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.993 17:09:46 -- common/autotest_common.sh@884 -- # size=4096 00:06:49.993 17:09:46 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.993 17:09:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.993 17:09:46 -- common/autotest_common.sh@887 -- # return 0 00:06:49.993 17:09:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.993 17:09:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.993 17:09:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.993 /dev/nbd1 00:06:49.993 17:09:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.993 17:09:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.993 17:09:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.993 17:09:46 -- common/autotest_common.sh@867 -- # local i 00:06:49.993 17:09:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.993 17:09:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.993 17:09:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.993 17:09:46 -- common/autotest_common.sh@871 -- # break 00:06:49.993 17:09:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:50.252 17:09:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:50.252 17:09:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.252 1+0 records in 00:06:50.252 1+0 records out 00:06:50.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229562 s, 17.8 MB/s 00:06:50.252 17:09:46 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:50.252 17:09:46 -- common/autotest_common.sh@884 -- # size=4096 00:06:50.252 17:09:46 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:50.252 17:09:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:50.252 17:09:46 -- common/autotest_common.sh@887 -- # return 0 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.252 { 00:06:50.252 "nbd_device": "/dev/nbd0", 00:06:50.252 "bdev_name": "Malloc0" 00:06:50.252 }, 00:06:50.252 { 00:06:50.252 "nbd_device": 
"/dev/nbd1", 00:06:50.252 "bdev_name": "Malloc1" 00:06:50.252 } 00:06:50.252 ]' 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.252 { 00:06:50.252 "nbd_device": "/dev/nbd0", 00:06:50.252 "bdev_name": "Malloc0" 00:06:50.252 }, 00:06:50.252 { 00:06:50.252 "nbd_device": "/dev/nbd1", 00:06:50.252 "bdev_name": "Malloc1" 00:06:50.252 } 00:06:50.252 ]' 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.252 /dev/nbd1' 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.252 /dev/nbd1' 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.252 256+0 records in 00:06:50.252 256+0 records out 00:06:50.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115582 s, 90.7 MB/s 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.252 17:09:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.512 256+0 records in 00:06:50.512 256+0 records out 00:06:50.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194521 s, 53.9 MB/s 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.512 256+0 records in 00:06:50.512 256+0 records out 00:06:50.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019805 s, 52.9 MB/s 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.512 17:09:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.512 17:09:47 -- bdev/nbd_common.sh@51 -- # local i 00:06:50.512 17:09:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.512 17:09:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.512 17:09:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@41 -- # break 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@41 -- # break 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.771 17:09:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@65 -- # true 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.030 17:09:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.030 17:09:47 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.289 17:09:47 -- event/event.sh@35 -- # sleep 3 00:06:51.549 
[2024-12-14 17:09:47.983554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.549 [2024-12-14 17:09:48.016405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.549 [2024-12-14 17:09:48.016407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.549 [2024-12-14 17:09:48.056929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.549 [2024-12-14 17:09:48.056971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.838 17:09:50 -- event/event.sh@23 -- # for i in {0..2} 00:06:54.838 17:09:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:54.838 spdk_app_start Round 1 00:06:54.838 17:09:50 -- event/event.sh@25 -- # waitforlisten 1190037 /var/tmp/spdk-nbd.sock 00:06:54.838 17:09:50 -- common/autotest_common.sh@829 -- # '[' -z 1190037 ']' 00:06:54.838 17:09:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.838 17:09:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.838 17:09:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.838 17:09:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.838 17:09:50 -- common/autotest_common.sh@10 -- # set +x 00:06:54.839 17:09:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.839 17:09:50 -- common/autotest_common.sh@862 -- # return 0 00:06:54.839 17:09:50 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.839 Malloc0 00:06:54.839 17:09:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.839 Malloc1 00:06:54.839 17:09:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@12 -- # local i 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.839 17:09:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:55.097 /dev/nbd0 00:06:55.097 17:09:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
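Each round repeats the same RPC-driven setup that the trace shows here for Round 1: two 64 MB malloc bdevs with a 4 KiB block size are created over the app's UNIX socket and then exported as kernel NBD devices. A condensed sketch of those calls (socket path, sizes and device names taken from the trace; run from the spdk source root):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096          # first bdev, reported back as Malloc0
    rpc bdev_malloc_create 64 4096          # second bdev, reported back as Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0    # expose each malloc bdev as an NBD device
    rpc nbd_start_disk Malloc1 /dev/nbd1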
00:06:55.097 17:09:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.097 17:09:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:55.097 17:09:51 -- common/autotest_common.sh@867 -- # local i 00:06:55.097 17:09:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:55.097 17:09:51 -- common/autotest_common.sh@871 -- # break 00:06:55.097 17:09:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.097 1+0 records in 00:06:55.097 1+0 records out 00:06:55.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226443 s, 18.1 MB/s 00:06:55.097 17:09:51 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:55.097 17:09:51 -- common/autotest_common.sh@884 -- # size=4096 00:06:55.097 17:09:51 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:55.097 17:09:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:55.097 17:09:51 -- common/autotest_common.sh@887 -- # return 0 00:06:55.097 17:09:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.097 17:09:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.097 17:09:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.097 /dev/nbd1 00:06:55.097 17:09:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.097 17:09:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.097 17:09:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:55.097 17:09:51 -- common/autotest_common.sh@867 -- # local i 00:06:55.097 17:09:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:55.097 17:09:51 -- common/autotest_common.sh@871 -- # break 00:06:55.097 17:09:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:55.097 17:09:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.430 1+0 records in 00:06:55.430 1+0 records out 00:06:55.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250333 s, 16.4 MB/s 00:06:55.430 17:09:51 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:55.430 17:09:51 -- common/autotest_common.sh@884 -- # size=4096 00:06:55.430 17:09:51 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:55.430 17:09:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:55.430 17:09:51 -- common/autotest_common.sh@887 -- # return 0 00:06:55.430 17:09:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.430 17:09:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.430 17:09:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.430 17:09:51 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.430 17:09:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.430 17:09:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.430 { 00:06:55.430 "nbd_device": "/dev/nbd0", 00:06:55.430 "bdev_name": "Malloc0" 00:06:55.430 }, 00:06:55.430 { 00:06:55.430 "nbd_device": "/dev/nbd1", 00:06:55.430 "bdev_name": "Malloc1" 00:06:55.430 } 00:06:55.430 ]' 00:06:55.430 17:09:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.430 { 00:06:55.430 "nbd_device": "/dev/nbd0", 00:06:55.430 "bdev_name": "Malloc0" 00:06:55.430 }, 00:06:55.430 { 00:06:55.430 "nbd_device": "/dev/nbd1", 00:06:55.430 "bdev_name": "Malloc1" 00:06:55.430 } 00:06:55.430 ]' 00:06:55.430 17:09:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.430 /dev/nbd1' 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.430 /dev/nbd1' 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.430 256+0 records in 00:06:55.430 256+0 records out 00:06:55.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110096 s, 95.2 MB/s 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.430 256+0 records in 00:06:55.430 256+0 records out 00:06:55.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168441 s, 62.3 MB/s 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.430 256+0 records in 00:06:55.430 256+0 records out 00:06:55.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202033 s, 51.9 MB/s 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 
00:06:55.430 17:09:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@51 -- # local i 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.430 17:09:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@41 -- # break 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.753 17:09:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@41 -- # break 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:56.012 17:09:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@65 -- # true 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@65 -- # count=0 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@66 
-- # echo 0 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@104 -- # count=0 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:56.272 17:09:52 -- bdev/nbd_common.sh@109 -- # return 0 00:06:56.272 17:09:52 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.272 17:09:52 -- event/event.sh@35 -- # sleep 3 00:06:56.531 [2024-12-14 17:09:53.107201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.531 [2024-12-14 17:09:53.140707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.531 [2024-12-14 17:09:53.140710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.531 [2024-12-14 17:09:53.181197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.531 [2024-12-14 17:09:53.181252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.819 17:09:55 -- event/event.sh@23 -- # for i in {0..2} 00:06:59.819 17:09:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:59.819 spdk_app_start Round 2 00:06:59.819 17:09:55 -- event/event.sh@25 -- # waitforlisten 1190037 /var/tmp/spdk-nbd.sock 00:06:59.819 17:09:55 -- common/autotest_common.sh@829 -- # '[' -z 1190037 ']' 00:06:59.819 17:09:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.819 17:09:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.819 17:09:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
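The "Waiting for process to start up and listen on UNIX domain socket ..." message printed here comes from a bounded retry helper in autotest_common.sh. The trace only shows its entry and its return, so the following is a plausible sketch of its shape, not the actual implementation: it polls the target's RPC socket until it answers or the retry budget runs out.

    # Hypothetical shape of a waitforlisten-style helper (assumed, not copied
    # from autotest_common.sh): poll the RPC socket until the app responds.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died before it could listen
            if [[ -S $rpc_addr ]] && ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }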
00:06:59.819 17:09:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.819 17:09:55 -- common/autotest_common.sh@10 -- # set +x 00:06:59.819 17:09:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.819 17:09:56 -- common/autotest_common.sh@862 -- # return 0 00:06:59.819 17:09:56 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.819 Malloc0 00:06:59.819 17:09:56 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.819 Malloc1 00:06:59.819 17:09:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.819 17:09:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.820 17:09:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.820 17:09:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.820 17:09:56 -- bdev/nbd_common.sh@12 -- # local i 00:06:59.820 17:09:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.820 17:09:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.820 17:09:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.078 /dev/nbd0 00:07:00.078 17:09:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.078 17:09:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.078 17:09:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:00.078 17:09:56 -- common/autotest_common.sh@867 -- # local i 00:07:00.078 17:09:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:00.079 17:09:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:00.079 17:09:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:00.079 17:09:56 -- common/autotest_common.sh@871 -- # break 00:07:00.079 17:09:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:00.079 17:09:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:00.079 17:09:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.079 1+0 records in 00:07:00.079 1+0 records out 00:07:00.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239202 s, 17.1 MB/s 00:07:00.079 17:09:56 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:00.079 17:09:56 -- common/autotest_common.sh@884 -- # size=4096 00:07:00.079 17:09:56 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:00.079 17:09:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
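The grep/dd/stat sequence traced in this stretch is the waitfornbd check: it waits until the new device node appears in /proc/partitions, then proves the device is readable by pulling one 4 KiB block with O_DIRECT and checking the copy is non-empty. Reconstructed from the commands visible in the trace; the real helper in autotest_common.sh may differ in details such as sleeps and cleanup:

    waitfornbd_sketch() {
        local nbd_name=$1 i size
        local scratch="$rootdir/test/event/nbdtest"    # $rootdir assumed to point at the spdk tree
        for ((i = 1; i <= 20; i++)); do                # wait for the kernel to register the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do                # prove a direct read works
            dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || { sleep 0.1; continue; }
            size=$(stat -c %s "$scratch")
            rm -f "$scratch"
            [[ $size != 0 ]] && return 0
        done
        return 1
    }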
00:07:00.079 17:09:56 -- common/autotest_common.sh@887 -- # return 0 00:07:00.079 17:09:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.079 17:09:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.079 17:09:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.337 /dev/nbd1 00:07:00.338 17:09:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.338 17:09:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.338 17:09:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:00.338 17:09:56 -- common/autotest_common.sh@867 -- # local i 00:07:00.338 17:09:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:00.338 17:09:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:00.338 17:09:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:00.338 17:09:56 -- common/autotest_common.sh@871 -- # break 00:07:00.338 17:09:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:00.338 17:09:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:00.338 17:09:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.338 1+0 records in 00:07:00.338 1+0 records out 00:07:00.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243062 s, 16.9 MB/s 00:07:00.338 17:09:56 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:00.338 17:09:56 -- common/autotest_common.sh@884 -- # size=4096 00:07:00.338 17:09:56 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:00.338 17:09:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:00.338 17:09:56 -- common/autotest_common.sh@887 -- # return 0 00:07:00.338 17:09:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.338 17:09:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.338 17:09:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.338 17:09:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.338 17:09:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.597 { 00:07:00.597 "nbd_device": "/dev/nbd0", 00:07:00.597 "bdev_name": "Malloc0" 00:07:00.597 }, 00:07:00.597 { 00:07:00.597 "nbd_device": "/dev/nbd1", 00:07:00.597 "bdev_name": "Malloc1" 00:07:00.597 } 00:07:00.597 ]' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.597 { 00:07:00.597 "nbd_device": "/dev/nbd0", 00:07:00.597 "bdev_name": "Malloc0" 00:07:00.597 }, 00:07:00.597 { 00:07:00.597 "nbd_device": "/dev/nbd1", 00:07:00.597 "bdev_name": "Malloc1" 00:07:00.597 } 00:07:00.597 ]' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.597 /dev/nbd1' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.597 /dev/nbd1' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 
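What follows is the Round 2 data pass: nbd_dd_data_verify is called once in write mode and once in verify mode. In write mode it fills a 1 MiB scratch file from /dev/urandom and copies it onto every NBD device with O_DIRECT; in verify mode it compares each device against that file with cmp and then deletes it. A condensed sketch matching the traced commands (scratch-file path as in the log):

    nbd_dd_data_verify_sketch() {
        local operation=$1; shift
        local nbd_list=("$@")
        local tmp_file="$rootdir/test/event/nbdrandtest"
        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256      # 1 MiB of random data
            for dev in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
            done
        else
            for dev in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$dev"                      # any mismatch fails the test
            done
            rm "$tmp_file"
        fi
    }

    # e.g. nbd_dd_data_verify_sketch write  /dev/nbd0 /dev/nbd1
    #      nbd_dd_data_verify_sketch verify /dev/nbd0 /dev/nbd1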
00:07:00.597 17:09:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.597 256+0 records in 00:07:00.597 256+0 records out 00:07:00.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01138 s, 92.1 MB/s 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.597 256+0 records in 00:07:00.597 256+0 records out 00:07:00.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190293 s, 55.1 MB/s 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.597 256+0 records in 00:07:00.597 256+0 records out 00:07:00.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184097 s, 57.0 MB/s 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@51 -- # local i 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.597 17:09:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.856 17:09:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.856 17:09:57 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.856 17:09:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.856 17:09:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.856 17:09:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.856 17:09:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.856 17:09:57 -- bdev/nbd_common.sh@41 -- # break 00:07:00.856 17:09:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.857 17:09:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.857 17:09:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@41 -- # break 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.116 17:09:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@65 -- # true 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.375 17:09:57 -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.375 17:09:57 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.375 17:09:58 -- event/event.sh@35 -- # sleep 3 00:07:01.634 [2024-12-14 17:09:58.220431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.634 [2024-12-14 17:09:58.252213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.634 [2024-12-14 17:09:58.252216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.634 [2024-12-14 17:09:58.292631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.634 [2024-12-14 17:09:58.292676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
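At this point Round 2 has been torn down exactly like Rounds 0 and 1: data verified, NBD devices stopped, spdk_kill_instance SIGTERM sent, and the app_repeat binary restarts its SPDK app for the next round. The driving loop in the test script reduces to roughly the following, using the rpc shorthand from the earlier sketch (helper names as they appear in the trace; exact arguments live in test/event/event.sh):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # $repeat_pid: pid of app_repeat
        rpc bdev_malloc_create 64 4096                       # Malloc0
        rpc bdev_malloc_create 64 4096                       # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc spdk_kill_instance SIGTERM                       # app_repeat catches this and restarts
        sleep 3                                              # give the app time to come back up
    done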
00:07:04.921 17:10:01 -- event/event.sh@38 -- # waitforlisten 1190037 /var/tmp/spdk-nbd.sock 00:07:04.921 17:10:01 -- common/autotest_common.sh@829 -- # '[' -z 1190037 ']' 00:07:04.921 17:10:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.921 17:10:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.921 17:10:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.921 17:10:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.921 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:07:04.921 17:10:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.921 17:10:01 -- common/autotest_common.sh@862 -- # return 0 00:07:04.921 17:10:01 -- event/event.sh@39 -- # killprocess 1190037 00:07:04.921 17:10:01 -- common/autotest_common.sh@936 -- # '[' -z 1190037 ']' 00:07:04.921 17:10:01 -- common/autotest_common.sh@940 -- # kill -0 1190037 00:07:04.921 17:10:01 -- common/autotest_common.sh@941 -- # uname 00:07:04.921 17:10:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.921 17:10:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1190037 00:07:04.921 17:10:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.921 17:10:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.921 17:10:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1190037' 00:07:04.921 killing process with pid 1190037 00:07:04.921 17:10:01 -- common/autotest_common.sh@955 -- # kill 1190037 00:07:04.921 17:10:01 -- common/autotest_common.sh@960 -- # wait 1190037 00:07:04.921 spdk_app_start is called in Round 0. 00:07:04.921 Shutdown signal received, stop current app iteration 00:07:04.921 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:07:04.921 spdk_app_start is called in Round 1. 00:07:04.921 Shutdown signal received, stop current app iteration 00:07:04.921 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:07:04.921 spdk_app_start is called in Round 2. 00:07:04.921 Shutdown signal received, stop current app iteration 00:07:04.921 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:07:04.921 spdk_app_start is called in Round 3. 
00:07:04.921 Shutdown signal received, stop current app iteration 00:07:04.921 17:10:01 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:04.921 17:10:01 -- event/event.sh@42 -- # return 0 00:07:04.921 00:07:04.921 real 0m16.409s 00:07:04.921 user 0m35.250s 00:07:04.921 sys 0m2.915s 00:07:04.921 17:10:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.921 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:07:04.921 ************************************ 00:07:04.921 END TEST app_repeat 00:07:04.921 ************************************ 00:07:04.921 17:10:01 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:04.921 17:10:01 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:04.921 17:10:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.921 17:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.921 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:07:04.921 ************************************ 00:07:04.921 START TEST cpu_locks 00:07:04.921 ************************************ 00:07:04.921 17:10:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:04.921 * Looking for test storage... 00:07:04.921 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:04.921 17:10:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:04.921 17:10:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:04.921 17:10:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.180 17:10:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.180 17:10:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.180 17:10:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.180 17:10:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.180 17:10:01 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.180 17:10:01 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.180 17:10:01 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.180 17:10:01 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.180 17:10:01 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.180 17:10:01 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.180 17:10:01 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.180 17:10:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.180 17:10:01 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.180 17:10:01 -- scripts/common.sh@344 -- # : 1 00:07:05.180 17:10:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.180 17:10:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.180 17:10:01 -- scripts/common.sh@364 -- # decimal 1 00:07:05.180 17:10:01 -- scripts/common.sh@352 -- # local d=1 00:07:05.180 17:10:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.180 17:10:01 -- scripts/common.sh@354 -- # echo 1 00:07:05.180 17:10:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.180 17:10:01 -- scripts/common.sh@365 -- # decimal 2 00:07:05.180 17:10:01 -- scripts/common.sh@352 -- # local d=2 00:07:05.180 17:10:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.180 17:10:01 -- scripts/common.sh@354 -- # echo 2 00:07:05.180 17:10:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.180 17:10:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.180 17:10:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.180 17:10:01 -- scripts/common.sh@367 -- # return 0 00:07:05.180 17:10:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.180 17:10:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.180 --rc genhtml_branch_coverage=1 00:07:05.180 --rc genhtml_function_coverage=1 00:07:05.180 --rc genhtml_legend=1 00:07:05.180 --rc geninfo_all_blocks=1 00:07:05.180 --rc geninfo_unexecuted_blocks=1 00:07:05.180 00:07:05.180 ' 00:07:05.180 17:10:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.180 --rc genhtml_branch_coverage=1 00:07:05.180 --rc genhtml_function_coverage=1 00:07:05.180 --rc genhtml_legend=1 00:07:05.180 --rc geninfo_all_blocks=1 00:07:05.180 --rc geninfo_unexecuted_blocks=1 00:07:05.180 00:07:05.180 ' 00:07:05.180 17:10:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.180 --rc genhtml_branch_coverage=1 00:07:05.180 --rc genhtml_function_coverage=1 00:07:05.180 --rc genhtml_legend=1 00:07:05.180 --rc geninfo_all_blocks=1 00:07:05.180 --rc geninfo_unexecuted_blocks=1 00:07:05.180 00:07:05.180 ' 00:07:05.180 17:10:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.180 --rc genhtml_branch_coverage=1 00:07:05.180 --rc genhtml_function_coverage=1 00:07:05.180 --rc genhtml_legend=1 00:07:05.180 --rc geninfo_all_blocks=1 00:07:05.180 --rc geninfo_unexecuted_blocks=1 00:07:05.180 00:07:05.180 ' 00:07:05.180 17:10:01 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:05.180 17:10:01 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:05.180 17:10:01 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:05.180 17:10:01 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:05.180 17:10:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.180 17:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.180 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 ************************************ 00:07:05.180 START TEST default_locks 00:07:05.180 ************************************ 00:07:05.180 17:10:01 -- common/autotest_common.sh@1114 -- # default_locks 00:07:05.180 17:10:01 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1193170 00:07:05.180 17:10:01 -- event/cpu_locks.sh@47 -- # waitforlisten 1193170 00:07:05.180 17:10:01 -- event/cpu_locks.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.180 17:10:01 -- common/autotest_common.sh@829 -- # '[' -z 1193170 ']' 00:07:05.180 17:10:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.180 17:10:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.180 17:10:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.180 17:10:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.180 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.180 [2024-12-14 17:10:01.723382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:05.181 [2024-12-14 17:10:01.723434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193170 ] 00:07:05.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.181 [2024-12-14 17:10:01.791985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.181 [2024-12-14 17:10:01.827483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.181 [2024-12-14 17:10:01.827619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.117 17:10:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.117 17:10:02 -- common/autotest_common.sh@862 -- # return 0 00:07:06.117 17:10:02 -- event/cpu_locks.sh@49 -- # locks_exist 1193170 00:07:06.117 17:10:02 -- event/cpu_locks.sh@22 -- # lslocks -p 1193170 00:07:06.117 17:10:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.685 lslocks: write error 00:07:06.685 17:10:03 -- event/cpu_locks.sh@50 -- # killprocess 1193170 00:07:06.685 17:10:03 -- common/autotest_common.sh@936 -- # '[' -z 1193170 ']' 00:07:06.685 17:10:03 -- common/autotest_common.sh@940 -- # kill -0 1193170 00:07:06.685 17:10:03 -- common/autotest_common.sh@941 -- # uname 00:07:06.685 17:10:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.685 17:10:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193170 00:07:06.685 17:10:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.685 17:10:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.685 17:10:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193170' 00:07:06.685 killing process with pid 1193170 00:07:06.685 17:10:03 -- common/autotest_common.sh@955 -- # kill 1193170 00:07:06.685 17:10:03 -- common/autotest_common.sh@960 -- # wait 1193170 00:07:06.944 17:10:03 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1193170 00:07:06.944 17:10:03 -- common/autotest_common.sh@650 -- # local es=0 00:07:06.944 17:10:03 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1193170 00:07:06.944 17:10:03 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.944 17:10:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.944 17:10:03 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.944 17:10:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.944 17:10:03 -- common/autotest_common.sh@653 -- # waitforlisten 1193170 00:07:06.944 17:10:03 -- 
common/autotest_common.sh@829 -- # '[' -z 1193170 ']' 00:07:06.944 17:10:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.944 17:10:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.944 17:10:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.944 17:10:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.944 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.944 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1193170) - No such process 00:07:06.944 ERROR: process (pid: 1193170) is no longer running 00:07:06.944 17:10:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.944 17:10:03 -- common/autotest_common.sh@862 -- # return 1 00:07:06.944 17:10:03 -- common/autotest_common.sh@653 -- # es=1 00:07:06.944 17:10:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.944 17:10:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.944 17:10:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.944 17:10:03 -- event/cpu_locks.sh@54 -- # no_locks 00:07:06.944 17:10:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:06.944 17:10:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:06.944 17:10:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:06.944 00:07:06.944 real 0m1.859s 00:07:06.944 user 0m1.958s 00:07:06.944 sys 0m0.697s 00:07:06.944 17:10:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.944 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.944 ************************************ 00:07:06.944 END TEST default_locks 00:07:06.944 ************************************ 00:07:06.944 17:10:03 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:06.944 17:10:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.944 17:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.944 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.944 ************************************ 00:07:06.944 START TEST default_locks_via_rpc 00:07:06.944 ************************************ 00:07:06.944 17:10:03 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:07:06.944 17:10:03 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1193560 00:07:06.944 17:10:03 -- event/cpu_locks.sh@63 -- # waitforlisten 1193560 00:07:06.944 17:10:03 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.944 17:10:03 -- common/autotest_common.sh@829 -- # '[' -z 1193560 ']' 00:07:06.944 17:10:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.944 17:10:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.944 17:10:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.944 17:10:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.944 17:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:07.203 [2024-12-14 17:10:03.628361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
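default_locks has just finished: it confirmed that a plain spdk_tgt -m 0x1 holds a spdk_cpu_lock file lock (visible through lslocks) and that waitforlisten on the killed pid fails as expected. The next test, default_locks_via_rpc, toggles those core-mask locks at runtime over JSON-RPC instead. A condensed sketch of the sequence the following trace records (binary path, RPC names and the lslocks check as in the log):

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    waitforlisten "$tgt_pid"
    ./scripts/rpc.py framework_disable_cpumask_locks                        # release the core 0 lock file
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "no cpu locks held"
    ./scripts/rpc.py framework_enable_cpumask_locks                         # take the lock again
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "cpu lock re-acquired"
    kill "$tgt_pid"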
00:07:07.203 [2024-12-14 17:10:03.628422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193560 ] 00:07:07.203 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.203 [2024-12-14 17:10:03.698565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.203 [2024-12-14 17:10:03.735235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.203 [2024-12-14 17:10:03.735356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.771 17:10:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.771 17:10:04 -- common/autotest_common.sh@862 -- # return 0 00:07:07.771 17:10:04 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:07.771 17:10:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.771 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:07:07.771 17:10:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.771 17:10:04 -- event/cpu_locks.sh@67 -- # no_locks 00:07:07.771 17:10:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.771 17:10:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.771 17:10:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.771 17:10:04 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.771 17:10:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.771 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:07:07.771 17:10:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.771 17:10:04 -- event/cpu_locks.sh@71 -- # locks_exist 1193560 00:07:07.771 17:10:04 -- event/cpu_locks.sh@22 -- # lslocks -p 1193560 00:07:07.771 17:10:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.338 17:10:04 -- event/cpu_locks.sh@73 -- # killprocess 1193560 00:07:08.338 17:10:04 -- common/autotest_common.sh@936 -- # '[' -z 1193560 ']' 00:07:08.338 17:10:04 -- common/autotest_common.sh@940 -- # kill -0 1193560 00:07:08.338 17:10:04 -- common/autotest_common.sh@941 -- # uname 00:07:08.338 17:10:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.338 17:10:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193560 00:07:08.596 17:10:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.596 17:10:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.596 17:10:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193560' 00:07:08.596 killing process with pid 1193560 00:07:08.596 17:10:05 -- common/autotest_common.sh@955 -- # kill 1193560 00:07:08.596 17:10:05 -- common/autotest_common.sh@960 -- # wait 1193560 00:07:08.855 00:07:08.855 real 0m1.764s 00:07:08.855 user 0m1.866s 00:07:08.855 sys 0m0.605s 00:07:08.855 17:10:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.855 17:10:05 -- common/autotest_common.sh@10 -- # set +x 00:07:08.855 ************************************ 00:07:08.855 END TEST default_locks_via_rpc 00:07:08.855 ************************************ 00:07:08.855 17:10:05 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:08.855 17:10:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.855 17:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.855 17:10:05 -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.855 ************************************ 00:07:08.855 START TEST non_locking_app_on_locked_coremask 00:07:08.855 ************************************ 00:07:08.855 17:10:05 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:07:08.855 17:10:05 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1193989 00:07:08.855 17:10:05 -- event/cpu_locks.sh@81 -- # waitforlisten 1193989 /var/tmp/spdk.sock 00:07:08.855 17:10:05 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.855 17:10:05 -- common/autotest_common.sh@829 -- # '[' -z 1193989 ']' 00:07:08.855 17:10:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.855 17:10:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.855 17:10:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.855 17:10:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.855 17:10:05 -- common/autotest_common.sh@10 -- # set +x 00:07:08.855 [2024-12-14 17:10:05.440800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.855 [2024-12-14 17:10:05.440860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193989 ] 00:07:08.855 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.855 [2024-12-14 17:10:05.510108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.113 [2024-12-14 17:10:05.547337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.113 [2024-12-14 17:10:05.547450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.680 17:10:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.680 17:10:06 -- common/autotest_common.sh@862 -- # return 0 00:07:09.680 17:10:06 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:09.680 17:10:06 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1194041 00:07:09.680 17:10:06 -- event/cpu_locks.sh@85 -- # waitforlisten 1194041 /var/tmp/spdk2.sock 00:07:09.680 17:10:06 -- common/autotest_common.sh@829 -- # '[' -z 1194041 ']' 00:07:09.680 17:10:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.680 17:10:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.680 17:10:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.680 17:10:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.680 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:07:09.680 [2024-12-14 17:10:06.269415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
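non_locking_app_on_locked_coremask starts two targets on the same core mask: the first spdk_tgt holds the core 0 spdk_cpu_lock, and the second is launched with --disable-cpumask-locks and its own RPC socket so it can come up on -m 0x1 without contending for that lock. The trace that follows checks exactly this with lslocks before killing both. Outside the harness the same experiment looks roughly like this (paths and flags as in the log):

    ./build/bin/spdk_tgt -m 0x1 &
    locked_pid=$!
    waitforlisten "$locked_pid" /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    unlocked_pid=$!
    waitforlisten "$unlocked_pid" /var/tmp/spdk2.sock
    lslocks -p "$locked_pid"   | grep -q spdk_cpu_lock && echo "first instance holds the core 0 lock"
    lslocks -p "$unlocked_pid" | grep -q spdk_cpu_lock || echo "second instance holds no core lock"
    kill "$locked_pid" "$unlocked_pid"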
00:07:09.680 [2024-12-14 17:10:06.269470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194041 ] 00:07:09.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.680 [2024-12-14 17:10:06.363755] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.680 [2024-12-14 17:10:06.363777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.939 [2024-12-14 17:10:06.436044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.939 [2024-12-14 17:10:06.436160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.507 17:10:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.507 17:10:07 -- common/autotest_common.sh@862 -- # return 0 00:07:10.507 17:10:07 -- event/cpu_locks.sh@87 -- # locks_exist 1193989 00:07:10.507 17:10:07 -- event/cpu_locks.sh@22 -- # lslocks -p 1193989 00:07:10.507 17:10:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.441 lslocks: write error 00:07:11.441 17:10:07 -- event/cpu_locks.sh@89 -- # killprocess 1193989 00:07:11.441 17:10:07 -- common/autotest_common.sh@936 -- # '[' -z 1193989 ']' 00:07:11.441 17:10:07 -- common/autotest_common.sh@940 -- # kill -0 1193989 00:07:11.441 17:10:07 -- common/autotest_common.sh@941 -- # uname 00:07:11.441 17:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.441 17:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193989 00:07:11.441 17:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.441 17:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.441 17:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193989' 00:07:11.442 killing process with pid 1193989 00:07:11.442 17:10:07 -- common/autotest_common.sh@955 -- # kill 1193989 00:07:11.442 17:10:07 -- common/autotest_common.sh@960 -- # wait 1193989 00:07:12.008 17:10:08 -- event/cpu_locks.sh@90 -- # killprocess 1194041 00:07:12.008 17:10:08 -- common/autotest_common.sh@936 -- # '[' -z 1194041 ']' 00:07:12.008 17:10:08 -- common/autotest_common.sh@940 -- # kill -0 1194041 00:07:12.008 17:10:08 -- common/autotest_common.sh@941 -- # uname 00:07:12.008 17:10:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.008 17:10:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1194041 00:07:12.008 17:10:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.008 17:10:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.008 17:10:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1194041' 00:07:12.008 killing process with pid 1194041 00:07:12.008 17:10:08 -- common/autotest_common.sh@955 -- # kill 1194041 00:07:12.008 17:10:08 -- common/autotest_common.sh@960 -- # wait 1194041 00:07:12.267 00:07:12.267 real 0m3.526s 00:07:12.267 user 0m3.793s 00:07:12.267 sys 0m1.131s 00:07:12.267 17:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.267 17:10:08 -- common/autotest_common.sh@10 -- # set +x 00:07:12.267 ************************************ 00:07:12.267 END TEST non_locking_app_on_locked_coremask 00:07:12.267 ************************************ 00:07:12.526 17:10:08 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:07:12.526 17:10:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.526 17:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.526 17:10:08 -- common/autotest_common.sh@10 -- # set +x 00:07:12.526 ************************************ 00:07:12.526 START TEST locking_app_on_unlocked_coremask 00:07:12.526 ************************************ 00:07:12.526 17:10:08 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:07:12.526 17:10:08 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1194609 00:07:12.526 17:10:08 -- event/cpu_locks.sh@99 -- # waitforlisten 1194609 /var/tmp/spdk.sock 00:07:12.526 17:10:08 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:12.526 17:10:08 -- common/autotest_common.sh@829 -- # '[' -z 1194609 ']' 00:07:12.526 17:10:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.526 17:10:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.526 17:10:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.526 17:10:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.526 17:10:08 -- common/autotest_common.sh@10 -- # set +x 00:07:12.526 [2024-12-14 17:10:09.021029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:12.526 [2024-12-14 17:10:09.021082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194609 ] 00:07:12.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.526 [2024-12-14 17:10:09.089236] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:12.526 [2024-12-14 17:10:09.089268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.526 [2024-12-14 17:10:09.121655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.526 [2024-12-14 17:10:09.121780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.462 17:10:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.462 17:10:09 -- common/autotest_common.sh@862 -- # return 0 00:07:13.462 17:10:09 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1194757 00:07:13.462 17:10:09 -- event/cpu_locks.sh@103 -- # waitforlisten 1194757 /var/tmp/spdk2.sock 00:07:13.462 17:10:09 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.462 17:10:09 -- common/autotest_common.sh@829 -- # '[' -z 1194757 ']' 00:07:13.462 17:10:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.462 17:10:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.462 17:10:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
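The two launch lines above show the pattern this group of tests relies on: two targets may share core 0 only if at most one of them takes the per-core lock, and the second instance needs its own RPC socket so the control paths do not collide. A condensed sketch of the same pairing (relative paths and launch order are illustrative, not the harness's exact sequence):

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &          # runs on core 0 without claiming the lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # claims /var/tmp/spdk_cpu_lock_000, separate RPC socket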
00:07:13.462 17:10:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.462 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:07:13.462 [2024-12-14 17:10:09.864844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.462 [2024-12-14 17:10:09.864897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194757 ] 00:07:13.462 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.462 [2024-12-14 17:10:09.962032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.462 [2024-12-14 17:10:10.035384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.462 [2024-12-14 17:10:10.035530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.030 17:10:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.030 17:10:10 -- common/autotest_common.sh@862 -- # return 0 00:07:14.030 17:10:10 -- event/cpu_locks.sh@105 -- # locks_exist 1194757 00:07:14.030 17:10:10 -- event/cpu_locks.sh@22 -- # lslocks -p 1194757 00:07:14.030 17:10:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.966 lslocks: write error 00:07:14.966 17:10:11 -- event/cpu_locks.sh@107 -- # killprocess 1194609 00:07:14.966 17:10:11 -- common/autotest_common.sh@936 -- # '[' -z 1194609 ']' 00:07:14.966 17:10:11 -- common/autotest_common.sh@940 -- # kill -0 1194609 00:07:14.966 17:10:11 -- common/autotest_common.sh@941 -- # uname 00:07:14.966 17:10:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.966 17:10:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1194609 00:07:14.966 17:10:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.966 17:10:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.966 17:10:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1194609' 00:07:14.966 killing process with pid 1194609 00:07:14.966 17:10:11 -- common/autotest_common.sh@955 -- # kill 1194609 00:07:14.966 17:10:11 -- common/autotest_common.sh@960 -- # wait 1194609 00:07:15.534 17:10:12 -- event/cpu_locks.sh@108 -- # killprocess 1194757 00:07:15.534 17:10:12 -- common/autotest_common.sh@936 -- # '[' -z 1194757 ']' 00:07:15.534 17:10:12 -- common/autotest_common.sh@940 -- # kill -0 1194757 00:07:15.534 17:10:12 -- common/autotest_common.sh@941 -- # uname 00:07:15.534 17:10:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.534 17:10:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1194757 00:07:15.793 17:10:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.793 17:10:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.793 17:10:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1194757' 00:07:15.793 killing process with pid 1194757 00:07:15.793 17:10:12 -- common/autotest_common.sh@955 -- # kill 1194757 00:07:15.793 17:10:12 -- common/autotest_common.sh@960 -- # wait 1194757 00:07:16.052 00:07:16.052 real 0m3.572s 00:07:16.052 user 0m3.850s 00:07:16.052 sys 0m1.150s 00:07:16.052 17:10:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.052 17:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:16.052 ************************************ 00:07:16.052 END TEST locking_app_on_unlocked_coremask 
00:07:16.052 ************************************ 00:07:16.052 17:10:12 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:16.052 17:10:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.052 17:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.052 17:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:16.052 ************************************ 00:07:16.053 START TEST locking_app_on_locked_coremask 00:07:16.053 ************************************ 00:07:16.053 17:10:12 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:07:16.053 17:10:12 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1195200 00:07:16.053 17:10:12 -- event/cpu_locks.sh@116 -- # waitforlisten 1195200 /var/tmp/spdk.sock 00:07:16.053 17:10:12 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.053 17:10:12 -- common/autotest_common.sh@829 -- # '[' -z 1195200 ']' 00:07:16.053 17:10:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.053 17:10:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.053 17:10:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.053 17:10:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.053 17:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:16.053 [2024-12-14 17:10:12.642822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.053 [2024-12-14 17:10:12.642876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195200 ] 00:07:16.053 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.053 [2024-12-14 17:10:12.713382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.311 [2024-12-14 17:10:12.748315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:16.311 [2024-12-14 17:10:12.748439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.878 17:10:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.878 17:10:13 -- common/autotest_common.sh@862 -- # return 0 00:07:16.878 17:10:13 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.878 17:10:13 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1195461 00:07:16.878 17:10:13 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1195461 /var/tmp/spdk2.sock 00:07:16.878 17:10:13 -- common/autotest_common.sh@650 -- # local es=0 00:07:16.878 17:10:13 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1195461 /var/tmp/spdk2.sock 00:07:16.878 17:10:13 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:16.878 17:10:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.878 17:10:13 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:16.878 17:10:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.878 17:10:13 -- common/autotest_common.sh@653 -- # waitforlisten 1195461 /var/tmp/spdk2.sock 00:07:16.878 17:10:13 -- common/autotest_common.sh@829 -- # '[' 
-z 1195461 ']' 00:07:16.878 17:10:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.878 17:10:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.878 17:10:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.878 17:10:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.878 17:10:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.878 [2024-12-14 17:10:13.481738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.878 [2024-12-14 17:10:13.481789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195461 ] 00:07:16.878 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.137 [2024-12-14 17:10:13.580064] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1195200 has claimed it. 00:07:17.137 [2024-12-14 17:10:13.580102] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.703 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1195461) - No such process 00:07:17.703 ERROR: process (pid: 1195461) is no longer running 00:07:17.703 17:10:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.703 17:10:14 -- common/autotest_common.sh@862 -- # return 1 00:07:17.703 17:10:14 -- common/autotest_common.sh@653 -- # es=1 00:07:17.703 17:10:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.703 17:10:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.703 17:10:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.703 17:10:14 -- event/cpu_locks.sh@122 -- # locks_exist 1195200 00:07:17.703 17:10:14 -- event/cpu_locks.sh@22 -- # lslocks -p 1195200 00:07:17.703 17:10:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.962 lslocks: write error 00:07:17.962 17:10:14 -- event/cpu_locks.sh@124 -- # killprocess 1195200 00:07:17.962 17:10:14 -- common/autotest_common.sh@936 -- # '[' -z 1195200 ']' 00:07:17.962 17:10:14 -- common/autotest_common.sh@940 -- # kill -0 1195200 00:07:17.962 17:10:14 -- common/autotest_common.sh@941 -- # uname 00:07:17.962 17:10:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.962 17:10:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1195200 00:07:17.962 17:10:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.962 17:10:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.962 17:10:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1195200' 00:07:17.962 killing process with pid 1195200 00:07:17.962 17:10:14 -- common/autotest_common.sh@955 -- # kill 1195200 00:07:17.962 17:10:14 -- common/autotest_common.sh@960 -- # wait 1195200 00:07:18.221 00:07:18.221 real 0m2.239s 00:07:18.221 user 0m2.482s 00:07:18.221 sys 0m0.642s 00:07:18.221 17:10:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.221 17:10:14 -- common/autotest_common.sh@10 -- # set +x 00:07:18.221 ************************************ 00:07:18.221 END TEST locking_app_on_locked_coremask 00:07:18.221 ************************************ 00:07:18.221 17:10:14 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:18.221 17:10:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.221 17:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.221 17:10:14 -- common/autotest_common.sh@10 -- # set +x 00:07:18.221 ************************************ 00:07:18.221 START TEST locking_overlapped_coremask 00:07:18.221 ************************************ 00:07:18.221 17:10:14 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:07:18.221 17:10:14 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1195760 00:07:18.221 17:10:14 -- event/cpu_locks.sh@133 -- # waitforlisten 1195760 /var/tmp/spdk.sock 00:07:18.221 17:10:14 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:18.221 17:10:14 -- common/autotest_common.sh@829 -- # '[' -z 1195760 ']' 00:07:18.221 17:10:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.221 17:10:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.221 17:10:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.221 17:10:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.221 17:10:14 -- common/autotest_common.sh@10 -- # set +x 00:07:18.479 [2024-12-14 17:10:14.927249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.479 [2024-12-14 17:10:14.927310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195760 ] 00:07:18.479 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.479 [2024-12-14 17:10:14.996154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.479 [2024-12-14 17:10:15.034520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.479 [2024-12-14 17:10:15.034672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.479 [2024-12-14 17:10:15.034765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.479 [2024-12-14 17:10:15.034767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.415 17:10:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.415 17:10:15 -- common/autotest_common.sh@862 -- # return 0 00:07:19.415 17:10:15 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1195799 00:07:19.415 17:10:15 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1195799 /var/tmp/spdk2.sock 00:07:19.415 17:10:15 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:19.415 17:10:15 -- common/autotest_common.sh@650 -- # local es=0 00:07:19.415 17:10:15 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1195799 /var/tmp/spdk2.sock 00:07:19.415 17:10:15 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:19.415 17:10:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.415 17:10:15 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:19.415 17:10:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.415 17:10:15 -- 
common/autotest_common.sh@653 -- # waitforlisten 1195799 /var/tmp/spdk2.sock 00:07:19.415 17:10:15 -- common/autotest_common.sh@829 -- # '[' -z 1195799 ']' 00:07:19.415 17:10:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.415 17:10:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.415 17:10:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.415 17:10:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.415 17:10:15 -- common/autotest_common.sh@10 -- # set +x 00:07:19.415 [2024-12-14 17:10:15.794633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.415 [2024-12-14 17:10:15.794683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195799 ] 00:07:19.415 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.415 [2024-12-14 17:10:15.894076] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1195760 has claimed it. 00:07:19.415 [2024-12-14 17:10:15.894119] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.982 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1195799) - No such process 00:07:19.982 ERROR: process (pid: 1195799) is no longer running 00:07:19.982 17:10:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.982 17:10:16 -- common/autotest_common.sh@862 -- # return 1 00:07:19.982 17:10:16 -- common/autotest_common.sh@653 -- # es=1 00:07:19.982 17:10:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.982 17:10:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.982 17:10:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.982 17:10:16 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.982 17:10:16 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.982 17:10:16 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.983 17:10:16 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.983 17:10:16 -- event/cpu_locks.sh@141 -- # killprocess 1195760 00:07:19.983 17:10:16 -- common/autotest_common.sh@936 -- # '[' -z 1195760 ']' 00:07:19.983 17:10:16 -- common/autotest_common.sh@940 -- # kill -0 1195760 00:07:19.983 17:10:16 -- common/autotest_common.sh@941 -- # uname 00:07:19.983 17:10:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:19.983 17:10:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1195760 00:07:19.983 17:10:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:19.983 17:10:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:19.983 17:10:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1195760' 00:07:19.983 killing process with pid 1195760 00:07:19.983 17:10:16 -- common/autotest_common.sh@955 -- # kill 1195760 00:07:19.983 17:10:16 -- 
common/autotest_common.sh@960 -- # wait 1195760 00:07:20.241 00:07:20.241 real 0m1.920s 00:07:20.241 user 0m5.509s 00:07:20.241 sys 0m0.456s 00:07:20.241 17:10:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.241 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 ************************************ 00:07:20.241 END TEST locking_overlapped_coremask 00:07:20.241 ************************************ 00:07:20.241 17:10:16 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:20.241 17:10:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.241 17:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.241 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 ************************************ 00:07:20.241 START TEST locking_overlapped_coremask_via_rpc 00:07:20.241 ************************************ 00:07:20.241 17:10:16 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:20.241 17:10:16 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1196072 00:07:20.241 17:10:16 -- event/cpu_locks.sh@149 -- # waitforlisten 1196072 /var/tmp/spdk.sock 00:07:20.241 17:10:16 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:20.241 17:10:16 -- common/autotest_common.sh@829 -- # '[' -z 1196072 ']' 00:07:20.241 17:10:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.241 17:10:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.241 17:10:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.241 17:10:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.241 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 [2024-12-14 17:10:16.898258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.241 [2024-12-14 17:10:16.898310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196072 ] 00:07:20.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.500 [2024-12-14 17:10:16.966606] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
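The claim_cpu_cores error above is plain mask arithmetic: the first target is started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks intersect on core 2, which is exactly the core named in the error; the _via_rpc variant below exercises the same overlap. A one-line check (illustrative):

    printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))      # -> 0x4, i.e. core 2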
00:07:20.500 [2024-12-14 17:10:16.966636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.500 [2024-12-14 17:10:17.000520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:20.500 [2024-12-14 17:10:17.000722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.500 [2024-12-14 17:10:17.000822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.500 [2024-12-14 17:10:17.000825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.067 17:10:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.067 17:10:17 -- common/autotest_common.sh@862 -- # return 0 00:07:21.067 17:10:17 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1196319 00:07:21.067 17:10:17 -- event/cpu_locks.sh@153 -- # waitforlisten 1196319 /var/tmp/spdk2.sock 00:07:21.067 17:10:17 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:21.067 17:10:17 -- common/autotest_common.sh@829 -- # '[' -z 1196319 ']' 00:07:21.067 17:10:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.067 17:10:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.067 17:10:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.067 17:10:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.067 17:10:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.325 [2024-12-14 17:10:17.753022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:21.325 [2024-12-14 17:10:17.753078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196319 ] 00:07:21.325 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.325 [2024-12-14 17:10:17.846922] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.325 [2024-12-14 17:10:17.846954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.325 [2024-12-14 17:10:17.926363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:21.325 [2024-12-14 17:10:17.926529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.325 [2024-12-14 17:10:17.926647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.325 [2024-12-14 17:10:17.926649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.894 17:10:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.894 17:10:18 -- common/autotest_common.sh@862 -- # return 0 00:07:21.894 17:10:18 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.894 17:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.894 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.894 17:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.894 17:10:18 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.894 17:10:18 -- common/autotest_common.sh@650 -- # local es=0 00:07:21.894 17:10:18 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.894 17:10:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:21.894 17:10:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.894 17:10:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:21.894 17:10:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.894 17:10:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.894 17:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.894 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.153 [2024-12-14 17:10:18.580569] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1196072 has claimed it. 00:07:22.153 request: 00:07:22.153 { 00:07:22.153 "method": "framework_enable_cpumask_locks", 00:07:22.153 "req_id": 1 00:07:22.153 } 00:07:22.153 Got JSON-RPC error response 00:07:22.153 response: 00:07:22.153 { 00:07:22.153 "code": -32603, 00:07:22.153 "message": "Failed to claim CPU core: 2" 00:07:22.153 } 00:07:22.153 17:10:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:22.153 17:10:18 -- common/autotest_common.sh@653 -- # es=1 00:07:22.153 17:10:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.153 17:10:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.153 17:10:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.153 17:10:18 -- event/cpu_locks.sh@158 -- # waitforlisten 1196072 /var/tmp/spdk.sock 00:07:22.153 17:10:18 -- common/autotest_common.sh@829 -- # '[' -z 1196072 ']' 00:07:22.153 17:10:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.153 17:10:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.153 17:10:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
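The JSON-RPC exchange above can be reproduced against a live target with the stock scripts/rpc.py client (an assumption here; the test drives it through the harness's rpc_cmd wrapper). While another target still holds the contested core, the call returns the -32603 error shown:

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected failure while pid 1196072 holds core 2:
    #   "code": -32603, "message": "Failed to claim CPU core: 2"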
00:07:22.153 17:10:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.153 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.153 17:10:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.153 17:10:18 -- common/autotest_common.sh@862 -- # return 0 00:07:22.153 17:10:18 -- event/cpu_locks.sh@159 -- # waitforlisten 1196319 /var/tmp/spdk2.sock 00:07:22.153 17:10:18 -- common/autotest_common.sh@829 -- # '[' -z 1196319 ']' 00:07:22.153 17:10:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.153 17:10:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.153 17:10:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.153 17:10:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.153 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.412 17:10:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.412 17:10:18 -- common/autotest_common.sh@862 -- # return 0 00:07:22.412 17:10:18 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.412 17:10:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.412 17:10:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.412 17:10:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.412 00:07:22.412 real 0m2.130s 00:07:22.412 user 0m0.861s 00:07:22.412 sys 0m0.202s 00:07:22.412 17:10:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.412 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.412 ************************************ 00:07:22.412 END TEST locking_overlapped_coremask_via_rpc 00:07:22.412 ************************************ 00:07:22.412 17:10:19 -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.412 17:10:19 -- event/cpu_locks.sh@15 -- # [[ -z 1196072 ]] 00:07:22.412 17:10:19 -- event/cpu_locks.sh@15 -- # killprocess 1196072 00:07:22.412 17:10:19 -- common/autotest_common.sh@936 -- # '[' -z 1196072 ']' 00:07:22.412 17:10:19 -- common/autotest_common.sh@940 -- # kill -0 1196072 00:07:22.412 17:10:19 -- common/autotest_common.sh@941 -- # uname 00:07:22.412 17:10:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.412 17:10:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1196072 00:07:22.412 17:10:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.412 17:10:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.412 17:10:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1196072' 00:07:22.412 killing process with pid 1196072 00:07:22.412 17:10:19 -- common/autotest_common.sh@955 -- # kill 1196072 00:07:22.412 17:10:19 -- common/autotest_common.sh@960 -- # wait 1196072 00:07:22.981 17:10:19 -- event/cpu_locks.sh@16 -- # [[ -z 1196319 ]] 00:07:22.981 17:10:19 -- event/cpu_locks.sh@16 -- # killprocess 1196319 00:07:22.981 17:10:19 -- common/autotest_common.sh@936 -- # '[' -z 1196319 ']' 00:07:22.981 17:10:19 -- common/autotest_common.sh@940 -- # kill -0 1196319 00:07:22.981 17:10:19 -- common/autotest_common.sh@941 -- # uname 
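The killprocess sequence above never signals blindly: it first checks that the pid still exists (kill -0) and then inspects the command name with ps before deciding how to kill it. A trimmed-down sketch of the same guard (not the harness's exact branching, which special-cases processes running under sudo):

    pid=1196072
    if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
        kill "$pid"        # pid exists and reports itself as an SPDK reactor, so a plain kill is safe
    fi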
00:07:22.981 17:10:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.981 17:10:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1196319 00:07:22.981 17:10:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:22.981 17:10:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:22.981 17:10:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1196319' 00:07:22.981 killing process with pid 1196319 00:07:22.981 17:10:19 -- common/autotest_common.sh@955 -- # kill 1196319 00:07:22.981 17:10:19 -- common/autotest_common.sh@960 -- # wait 1196319 00:07:23.241 17:10:19 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.241 17:10:19 -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.241 17:10:19 -- event/cpu_locks.sh@15 -- # [[ -z 1196072 ]] 00:07:23.241 17:10:19 -- event/cpu_locks.sh@15 -- # killprocess 1196072 00:07:23.241 17:10:19 -- common/autotest_common.sh@936 -- # '[' -z 1196072 ']' 00:07:23.241 17:10:19 -- common/autotest_common.sh@940 -- # kill -0 1196072 00:07:23.241 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1196072) - No such process 00:07:23.241 17:10:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1196072 is not found' 00:07:23.241 Process with pid 1196072 is not found 00:07:23.241 17:10:19 -- event/cpu_locks.sh@16 -- # [[ -z 1196319 ]] 00:07:23.241 17:10:19 -- event/cpu_locks.sh@16 -- # killprocess 1196319 00:07:23.241 17:10:19 -- common/autotest_common.sh@936 -- # '[' -z 1196319 ']' 00:07:23.241 17:10:19 -- common/autotest_common.sh@940 -- # kill -0 1196319 00:07:23.241 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1196319) - No such process 00:07:23.241 17:10:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1196319 is not found' 00:07:23.241 Process with pid 1196319 is not found 00:07:23.241 17:10:19 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.241 00:07:23.241 real 0m18.282s 00:07:23.241 user 0m31.241s 00:07:23.241 sys 0m5.845s 00:07:23.241 17:10:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.241 17:10:19 -- common/autotest_common.sh@10 -- # set +x 00:07:23.241 ************************************ 00:07:23.241 END TEST cpu_locks 00:07:23.241 ************************************ 00:07:23.241 00:07:23.241 real 0m43.956s 00:07:23.241 user 1m23.617s 00:07:23.241 sys 0m9.838s 00:07:23.241 17:10:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.241 17:10:19 -- common/autotest_common.sh@10 -- # set +x 00:07:23.241 ************************************ 00:07:23.241 END TEST event 00:07:23.241 ************************************ 00:07:23.241 17:10:19 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:23.241 17:10:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.241 17:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.241 17:10:19 -- common/autotest_common.sh@10 -- # set +x 00:07:23.241 ************************************ 00:07:23.241 START TEST thread 00:07:23.241 ************************************ 00:07:23.241 17:10:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:23.501 * Looking for test storage... 
00:07:23.501 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:23.501 17:10:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:23.501 17:10:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:23.501 17:10:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:23.501 17:10:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:23.501 17:10:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:23.501 17:10:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:23.501 17:10:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:23.501 17:10:20 -- scripts/common.sh@335 -- # IFS=.-: 00:07:23.501 17:10:20 -- scripts/common.sh@335 -- # read -ra ver1 00:07:23.501 17:10:20 -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.501 17:10:20 -- scripts/common.sh@336 -- # read -ra ver2 00:07:23.501 17:10:20 -- scripts/common.sh@337 -- # local 'op=<' 00:07:23.501 17:10:20 -- scripts/common.sh@339 -- # ver1_l=2 00:07:23.501 17:10:20 -- scripts/common.sh@340 -- # ver2_l=1 00:07:23.501 17:10:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:23.501 17:10:20 -- scripts/common.sh@343 -- # case "$op" in 00:07:23.501 17:10:20 -- scripts/common.sh@344 -- # : 1 00:07:23.501 17:10:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:23.501 17:10:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.501 17:10:20 -- scripts/common.sh@364 -- # decimal 1 00:07:23.501 17:10:20 -- scripts/common.sh@352 -- # local d=1 00:07:23.501 17:10:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.501 17:10:20 -- scripts/common.sh@354 -- # echo 1 00:07:23.501 17:10:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:23.501 17:10:20 -- scripts/common.sh@365 -- # decimal 2 00:07:23.501 17:10:20 -- scripts/common.sh@352 -- # local d=2 00:07:23.501 17:10:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.501 17:10:20 -- scripts/common.sh@354 -- # echo 2 00:07:23.501 17:10:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:23.501 17:10:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:23.501 17:10:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:23.501 17:10:20 -- scripts/common.sh@367 -- # return 0 00:07:23.501 17:10:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.501 17:10:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:23.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.501 --rc genhtml_branch_coverage=1 00:07:23.501 --rc genhtml_function_coverage=1 00:07:23.501 --rc genhtml_legend=1 00:07:23.501 --rc geninfo_all_blocks=1 00:07:23.501 --rc geninfo_unexecuted_blocks=1 00:07:23.501 00:07:23.501 ' 00:07:23.501 17:10:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:23.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.501 --rc genhtml_branch_coverage=1 00:07:23.501 --rc genhtml_function_coverage=1 00:07:23.501 --rc genhtml_legend=1 00:07:23.501 --rc geninfo_all_blocks=1 00:07:23.501 --rc geninfo_unexecuted_blocks=1 00:07:23.501 00:07:23.501 ' 00:07:23.501 17:10:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:23.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.501 --rc genhtml_branch_coverage=1 00:07:23.501 --rc genhtml_function_coverage=1 00:07:23.501 --rc genhtml_legend=1 00:07:23.501 --rc geninfo_all_blocks=1 00:07:23.501 --rc geninfo_unexecuted_blocks=1 00:07:23.501 00:07:23.501 ' 
00:07:23.501 17:10:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:23.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.501 --rc genhtml_branch_coverage=1 00:07:23.501 --rc genhtml_function_coverage=1 00:07:23.501 --rc genhtml_legend=1 00:07:23.501 --rc geninfo_all_blocks=1 00:07:23.501 --rc geninfo_unexecuted_blocks=1 00:07:23.501 00:07:23.501 ' 00:07:23.501 17:10:20 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.501 17:10:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:23.501 17:10:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.501 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:07:23.501 ************************************ 00:07:23.501 START TEST thread_poller_perf 00:07:23.501 ************************************ 00:07:23.501 17:10:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.501 [2024-12-14 17:10:20.061030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.501 [2024-12-14 17:10:20.061119] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196725 ] 00:07:23.501 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.501 [2024-12-14 17:10:20.133971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.501 [2024-12-14 17:10:20.170153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.501 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:24.878 [2024-12-14T16:10:21.562Z] ====================================== 00:07:24.878 [2024-12-14T16:10:21.562Z] busy:2511436274 (cyc) 00:07:24.879 [2024-12-14T16:10:21.563Z] total_run_count: 403000 00:07:24.879 [2024-12-14T16:10:21.563Z] tsc_hz: 2500000000 (cyc) 00:07:24.879 [2024-12-14T16:10:21.563Z] ====================================== 00:07:24.879 [2024-12-14T16:10:21.563Z] poller_cost: 6231 (cyc), 2492 (nsec) 00:07:24.879 00:07:24.879 real 0m1.193s 00:07:24.879 user 0m1.093s 00:07:24.879 sys 0m0.094s 00:07:24.879 17:10:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.879 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:07:24.879 ************************************ 00:07:24.879 END TEST thread_poller_perf 00:07:24.879 ************************************ 00:07:24.879 17:10:21 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.879 17:10:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:24.879 17:10:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.879 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:07:24.879 ************************************ 00:07:24.879 START TEST thread_poller_perf 00:07:24.879 ************************************ 00:07:24.879 17:10:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.879 [2024-12-14 17:10:21.301559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
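The poller_cost figures in the summary above are plain division over the numbers it prints: busy cycles / total_run_count, then converted to nanoseconds via tsc_hz. For the 1-microsecond-period run that is 2511436274 / 403000 ≈ 6231 cycles, and 6231 / 2.5 cycles-per-ns ≈ 2492 ns; the 0-period run below follows the same arithmetic. Reproduced with bc (illustrative):

    echo '2511436274 / 403000' | bc                    # -> 6231 cycles per poller iteration
    echo '6231 * 1000000000 / 2500000000' | bc         # -> 2492 ns at tsc_hz = 2500000000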
00:07:24.879 [2024-12-14 17:10:21.301647] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197006 ] 00:07:24.879 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.879 [2024-12-14 17:10:21.371955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.879 [2024-12-14 17:10:21.406157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.879 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:25.817 [2024-12-14T16:10:22.501Z] ====================================== 00:07:25.817 [2024-12-14T16:10:22.501Z] busy:2502256320 (cyc) 00:07:25.817 [2024-12-14T16:10:22.501Z] total_run_count: 5581000 00:07:25.817 [2024-12-14T16:10:22.501Z] tsc_hz: 2500000000 (cyc) 00:07:25.817 [2024-12-14T16:10:22.501Z] ====================================== 00:07:25.817 [2024-12-14T16:10:22.501Z] poller_cost: 448 (cyc), 179 (nsec) 00:07:25.817 00:07:25.817 real 0m1.183s 00:07:25.817 user 0m1.099s 00:07:25.817 sys 0m0.080s 00:07:25.817 17:10:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.817 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:07:25.817 ************************************ 00:07:25.817 END TEST thread_poller_perf 00:07:25.817 ************************************ 00:07:26.076 17:10:22 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:26.076 00:07:26.076 real 0m2.658s 00:07:26.076 user 0m2.318s 00:07:26.076 sys 0m0.364s 00:07:26.076 17:10:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.076 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:07:26.076 ************************************ 00:07:26.076 END TEST thread 00:07:26.076 ************************************ 00:07:26.076 17:10:22 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:26.076 17:10:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:26.076 17:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.076 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:07:26.076 ************************************ 00:07:26.076 START TEST accel 00:07:26.076 ************************************ 00:07:26.076 17:10:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:26.076 * Looking for test storage... 
00:07:26.076 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:26.076 17:10:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.076 17:10:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.076 17:10:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.076 17:10:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.076 17:10:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.076 17:10:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.076 17:10:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.076 17:10:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.076 17:10:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.077 17:10:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.077 17:10:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.077 17:10:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.077 17:10:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.077 17:10:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.077 17:10:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.077 17:10:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.077 17:10:22 -- scripts/common.sh@344 -- # : 1 00:07:26.077 17:10:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.077 17:10:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.077 17:10:22 -- scripts/common.sh@364 -- # decimal 1 00:07:26.077 17:10:22 -- scripts/common.sh@352 -- # local d=1 00:07:26.077 17:10:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.077 17:10:22 -- scripts/common.sh@354 -- # echo 1 00:07:26.077 17:10:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.077 17:10:22 -- scripts/common.sh@365 -- # decimal 2 00:07:26.077 17:10:22 -- scripts/common.sh@352 -- # local d=2 00:07:26.077 17:10:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.077 17:10:22 -- scripts/common.sh@354 -- # echo 2 00:07:26.077 17:10:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.077 17:10:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.077 17:10:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.077 17:10:22 -- scripts/common.sh@367 -- # return 0 00:07:26.077 17:10:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.077 17:10:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.077 --rc genhtml_branch_coverage=1 00:07:26.077 --rc genhtml_function_coverage=1 00:07:26.077 --rc genhtml_legend=1 00:07:26.077 --rc geninfo_all_blocks=1 00:07:26.077 --rc geninfo_unexecuted_blocks=1 00:07:26.077 00:07:26.077 ' 00:07:26.077 17:10:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.077 --rc genhtml_branch_coverage=1 00:07:26.077 --rc genhtml_function_coverage=1 00:07:26.077 --rc genhtml_legend=1 00:07:26.077 --rc geninfo_all_blocks=1 00:07:26.077 --rc geninfo_unexecuted_blocks=1 00:07:26.077 00:07:26.077 ' 00:07:26.077 17:10:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.077 --rc genhtml_branch_coverage=1 00:07:26.077 --rc genhtml_function_coverage=1 00:07:26.077 --rc genhtml_legend=1 00:07:26.077 --rc geninfo_all_blocks=1 00:07:26.077 --rc geninfo_unexecuted_blocks=1 00:07:26.077 00:07:26.077 ' 
00:07:26.077 17:10:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.077 --rc genhtml_branch_coverage=1 00:07:26.077 --rc genhtml_function_coverage=1 00:07:26.077 --rc genhtml_legend=1 00:07:26.077 --rc geninfo_all_blocks=1 00:07:26.077 --rc geninfo_unexecuted_blocks=1 00:07:26.077 00:07:26.077 ' 00:07:26.077 17:10:22 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:26.077 17:10:22 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:26.077 17:10:22 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:26.077 17:10:22 -- accel/accel.sh@59 -- # spdk_tgt_pid=1197337 00:07:26.077 17:10:22 -- accel/accel.sh@60 -- # waitforlisten 1197337 00:07:26.077 17:10:22 -- common/autotest_common.sh@829 -- # '[' -z 1197337 ']' 00:07:26.077 17:10:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.077 17:10:22 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:26.077 17:10:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.077 17:10:22 -- accel/accel.sh@58 -- # build_accel_config 00:07:26.077 17:10:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.077 17:10:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.077 17:10:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.077 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:07:26.077 17:10:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.077 17:10:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.077 17:10:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.077 17:10:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.077 17:10:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.077 17:10:22 -- accel/accel.sh@42 -- # jq -r . 00:07:26.336 [2024-12-14 17:10:22.766490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:26.336 [2024-12-14 17:10:22.766549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197337 ] 00:07:26.336 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.336 [2024-12-14 17:10:22.835672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.336 [2024-12-14 17:10:22.870897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.336 [2024-12-14 17:10:22.871017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.903 17:10:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.903 17:10:23 -- common/autotest_common.sh@862 -- # return 0 00:07:26.903 17:10:23 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.903 17:10:23 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:26.903 17:10:23 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.903 17:10:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.903 17:10:23 -- common/autotest_common.sh@10 -- # set +x 00:07:26.903 17:10:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.162 
17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.162 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.162 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.163 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.163 17:10:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.163 17:10:23 -- accel/accel.sh@64 -- # IFS== 00:07:27.163 17:10:23 -- accel/accel.sh@64 -- # read -r opc module 00:07:27.163 17:10:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:27.163 17:10:23 -- accel/accel.sh@67 -- # killprocess 1197337 00:07:27.163 17:10:23 -- common/autotest_common.sh@936 -- # '[' -z 1197337 ']' 00:07:27.163 17:10:23 -- common/autotest_common.sh@940 -- # kill -0 1197337 00:07:27.163 17:10:23 -- common/autotest_common.sh@941 -- # uname 00:07:27.163 17:10:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:27.163 17:10:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1197337 00:07:27.163 17:10:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:27.163 17:10:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:27.163 17:10:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1197337' 00:07:27.163 killing process with pid 1197337 00:07:27.163 17:10:23 -- common/autotest_common.sh@955 -- # kill 1197337 00:07:27.163 17:10:23 -- common/autotest_common.sh@960 -- # wait 1197337 00:07:27.422 17:10:23 -- accel/accel.sh@68 -- # trap - ERR 00:07:27.422 17:10:23 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:27.422 17:10:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:27.422 17:10:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.422 17:10:23 -- common/autotest_common.sh@10 -- # set +x 00:07:27.422 17:10:23 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:27.422 17:10:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:27.422 17:10:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.422 17:10:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.422 17:10:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.422 17:10:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.422 17:10:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.422 17:10:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.422 17:10:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.422 17:10:23 -- accel/accel.sh@42 -- # jq -r . 
00:07:27.422 17:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.422 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.422 17:10:24 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:27.422 17:10:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:27.422 17:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.422 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.422 ************************************ 00:07:27.422 START TEST accel_missing_filename 00:07:27.422 ************************************ 00:07:27.422 17:10:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:27.422 17:10:24 -- common/autotest_common.sh@650 -- # local es=0 00:07:27.422 17:10:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:27.422 17:10:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:27.422 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.422 17:10:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:27.422 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.422 17:10:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:27.422 17:10:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:27.422 17:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.422 17:10:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.422 17:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.422 17:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.422 17:10:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.422 17:10:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.422 17:10:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.422 17:10:24 -- accel/accel.sh@42 -- # jq -r . 00:07:27.422 [2024-12-14 17:10:24.074598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.422 [2024-12-14 17:10:24.074668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197641 ] 00:07:27.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.680 [2024-12-14 17:10:24.144818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.680 [2024-12-14 17:10:24.179833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.680 [2024-12-14 17:10:24.220383] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.680 [2024-12-14 17:10:24.280440] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:27.680 A filename is required. 
00:07:27.680 17:10:24 -- common/autotest_common.sh@653 -- # es=234 00:07:27.680 17:10:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.680 17:10:24 -- common/autotest_common.sh@662 -- # es=106 00:07:27.680 17:10:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:27.680 17:10:24 -- common/autotest_common.sh@670 -- # es=1 00:07:27.680 17:10:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.680 00:07:27.680 real 0m0.294s 00:07:27.680 user 0m0.200s 00:07:27.680 sys 0m0.131s 00:07:27.680 17:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.680 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.680 ************************************ 00:07:27.680 END TEST accel_missing_filename 00:07:27.680 ************************************ 00:07:27.939 17:10:24 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.939 17:10:24 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:27.939 17:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.939 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.939 ************************************ 00:07:27.939 START TEST accel_compress_verify 00:07:27.939 ************************************ 00:07:27.939 17:10:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.939 17:10:24 -- common/autotest_common.sh@650 -- # local es=0 00:07:27.939 17:10:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.939 17:10:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:27.939 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.939 17:10:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:27.939 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.939 17:10:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.939 17:10:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.939 17:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.939 17:10:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.939 17:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.939 17:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.939 17:10:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.939 17:10:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.939 17:10:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.939 17:10:24 -- accel/accel.sh@42 -- # jq -r . 00:07:27.939 [2024-12-14 17:10:24.407486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:27.939 [2024-12-14 17:10:24.407583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197664 ] 00:07:27.939 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.939 [2024-12-14 17:10:24.478065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.939 [2024-12-14 17:10:24.513398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.939 [2024-12-14 17:10:24.554232] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.939 [2024-12-14 17:10:24.613996] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:28.199 00:07:28.199 Compression does not support the verify option, aborting. 00:07:28.199 17:10:24 -- common/autotest_common.sh@653 -- # es=161 00:07:28.199 17:10:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.199 17:10:24 -- common/autotest_common.sh@662 -- # es=33 00:07:28.199 17:10:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:28.199 17:10:24 -- common/autotest_common.sh@670 -- # es=1 00:07:28.199 17:10:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.199 00:07:28.199 real 0m0.296s 00:07:28.199 user 0m0.207s 00:07:28.199 sys 0m0.129s 00:07:28.199 17:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.199 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:28.199 ************************************ 00:07:28.199 END TEST accel_compress_verify 00:07:28.199 ************************************ 00:07:28.199 17:10:24 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:28.199 17:10:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:28.199 17:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.199 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:28.199 ************************************ 00:07:28.199 START TEST accel_wrong_workload 00:07:28.199 ************************************ 00:07:28.199 17:10:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:28.199 17:10:24 -- common/autotest_common.sh@650 -- # local es=0 00:07:28.199 17:10:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:28.199 17:10:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:28.199 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.199 17:10:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:28.199 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.199 17:10:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:28.199 17:10:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:28.199 17:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.199 17:10:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.199 17:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.199 17:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.199 17:10:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.199 17:10:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.199 17:10:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.199 17:10:24 -- accel/accel.sh@42 -- # jq -r . 
00:07:28.199 Unsupported workload type: foobar 00:07:28.199 [2024-12-14 17:10:24.739530] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:28.199 accel_perf options: 00:07:28.199 [-h help message] 00:07:28.199 [-q queue depth per core] 00:07:28.199 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.199 [-T number of threads per core 00:07:28.199 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:28.199 [-t time in seconds] 00:07:28.199 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.199 [ dif_verify, , dif_generate, dif_generate_copy 00:07:28.199 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.199 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.199 [-S for crc32c workload, use this seed value (default 0) 00:07:28.199 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.199 [-f for fill workload, use this BYTE value (default 255) 00:07:28.199 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.199 [-y verify result if this switch is on] 00:07:28.199 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.199 Can be used to spread operations across a wider range of memory. 00:07:28.199 17:10:24 -- common/autotest_common.sh@653 -- # es=1 00:07:28.199 17:10:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.199 17:10:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.199 17:10:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.199 00:07:28.199 real 0m0.033s 00:07:28.199 user 0m0.017s 00:07:28.199 sys 0m0.016s 00:07:28.199 17:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.199 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:28.199 ************************************ 00:07:28.199 END TEST accel_wrong_workload 00:07:28.199 ************************************ 00:07:28.199 Error: writing output failed: Broken pipe 00:07:28.199 17:10:24 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.199 17:10:24 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:28.199 17:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.199 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:28.199 ************************************ 00:07:28.199 START TEST accel_negative_buffers 00:07:28.199 ************************************ 00:07:28.199 17:10:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.199 17:10:24 -- common/autotest_common.sh@650 -- # local es=0 00:07:28.199 17:10:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:28.199 17:10:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:28.199 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.199 17:10:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:28.199 17:10:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.199 17:10:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:28.199 17:10:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:07:28.199 17:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.199 17:10:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.199 17:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.199 17:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.199 17:10:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.199 17:10:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.199 17:10:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.199 17:10:24 -- accel/accel.sh@42 -- # jq -r . 00:07:28.199 -x option must be non-negative. 00:07:28.199 [2024-12-14 17:10:24.819811] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:28.199 accel_perf options: 00:07:28.199 [-h help message] 00:07:28.199 [-q queue depth per core] 00:07:28.199 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.199 [-T number of threads per core 00:07:28.199 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:28.199 [-t time in seconds] 00:07:28.199 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.199 [ dif_verify, , dif_generate, dif_generate_copy 00:07:28.199 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.199 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.199 [-S for crc32c workload, use this seed value (default 0) 00:07:28.199 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.199 [-f for fill workload, use this BYTE value (default 255) 00:07:28.199 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.199 [-y verify result if this switch is on] 00:07:28.199 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.199 Can be used to spread operations across a wider range of memory. 
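The option summary above (printed once after the rejected foobar workload and again after the rejected -x -1) documents the same accel_perf binary that the positive runs below exercise. A minimal sketch of the first of those runs, using only the flags visible in this log, again with the workspace path shortened and the harness-generated JSON config on /dev/fd/62 omitted:

  # 1-second software crc32c run, CRC-32C seed 32, verify results (-y)
  ./spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

In the result tables that follow, the MiB/s column is transfers/s multiplied by the transfer size, e.g. 598400 transfers/s x 4096 bytes is roughly 2337 MiB/s for the first crc32c run.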
00:07:28.199 17:10:24 -- common/autotest_common.sh@653 -- # es=1 00:07:28.199 17:10:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.199 17:10:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.199 17:10:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.199 00:07:28.199 real 0m0.036s 00:07:28.199 user 0m0.018s 00:07:28.199 sys 0m0.018s 00:07:28.199 17:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.199 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:28.199 ************************************ 00:07:28.199 END TEST accel_negative_buffers 00:07:28.199 ************************************ 00:07:28.199 Error: writing output failed: Broken pipe 00:07:28.199 17:10:24 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:28.199 17:10:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:28.199 17:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.199 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:07:28.199 ************************************ 00:07:28.199 START TEST accel_crc32c 00:07:28.199 ************************************ 00:07:28.199 17:10:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:28.199 17:10:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.199 17:10:24 -- accel/accel.sh@17 -- # local accel_module 00:07:28.199 17:10:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:28.199 17:10:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:28.199 17:10:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.199 17:10:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.199 17:10:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.199 17:10:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.200 17:10:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.200 17:10:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.200 17:10:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.200 17:10:24 -- accel/accel.sh@42 -- # jq -r . 00:07:28.458 [2024-12-14 17:10:24.896051] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.458 [2024-12-14 17:10:24.896110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197726 ] 00:07:28.458 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.458 [2024-12-14 17:10:24.965748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.458 [2024-12-14 17:10:25.001670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.835 17:10:26 -- accel/accel.sh@18 -- # out=' 00:07:29.835 SPDK Configuration: 00:07:29.835 Core mask: 0x1 00:07:29.835 00:07:29.835 Accel Perf Configuration: 00:07:29.835 Workload Type: crc32c 00:07:29.835 CRC-32C seed: 32 00:07:29.835 Transfer size: 4096 bytes 00:07:29.835 Vector count 1 00:07:29.835 Module: software 00:07:29.835 Queue depth: 32 00:07:29.835 Allocate depth: 32 00:07:29.835 # threads/core: 1 00:07:29.835 Run time: 1 seconds 00:07:29.835 Verify: Yes 00:07:29.835 00:07:29.835 Running for 1 seconds... 
00:07:29.835 00:07:29.835 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.835 ------------------------------------------------------------------------------------ 00:07:29.835 0,0 598400/s 2337 MiB/s 0 0 00:07:29.835 ==================================================================================== 00:07:29.835 Total 598400/s 2337 MiB/s 0 0' 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:29.835 17:10:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:29.835 17:10:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.835 17:10:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.835 17:10:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.835 17:10:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.835 17:10:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.835 17:10:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.835 17:10:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.835 17:10:26 -- accel/accel.sh@42 -- # jq -r . 00:07:29.835 [2024-12-14 17:10:26.180567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:29.835 [2024-12-14 17:10:26.180624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197992 ] 00:07:29.835 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.835 [2024-12-14 17:10:26.247843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.835 [2024-12-14 17:10:26.282406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val= 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val= 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=0x1 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val= 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val= 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=crc32c 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=32 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 
-- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val= 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=software 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=32 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=32 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=1 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val=Yes 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val= 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:29.835 17:10:26 -- accel/accel.sh@21 -- # val= 00:07:29.835 17:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:29.835 17:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:30.772 17:10:27 -- accel/accel.sh@21 -- # val= 00:07:30.772 17:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.772 17:10:27 -- accel/accel.sh@21 -- # val= 00:07:30.772 17:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.772 17:10:27 -- accel/accel.sh@21 -- # val= 00:07:30.772 17:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.772 17:10:27 -- accel/accel.sh@21 -- # val= 00:07:30.772 17:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.772 17:10:27 -- accel/accel.sh@21 -- # val= 00:07:30.772 17:10:27 -- accel/accel.sh@22 -- # case "$var" in 
00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.772 17:10:27 -- accel/accel.sh@21 -- # val= 00:07:30.772 17:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.772 17:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.772 17:10:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.772 17:10:27 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:30.772 17:10:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.772 00:07:30.772 real 0m2.578s 00:07:30.772 user 0m2.337s 00:07:30.772 sys 0m0.238s 00:07:30.772 17:10:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.772 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:07:30.772 ************************************ 00:07:30.772 END TEST accel_crc32c 00:07:30.772 ************************************ 00:07:31.031 17:10:27 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:31.031 17:10:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:31.031 17:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.031 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:07:31.031 ************************************ 00:07:31.031 START TEST accel_crc32c_C2 00:07:31.031 ************************************ 00:07:31.031 17:10:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:31.031 17:10:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.031 17:10:27 -- accel/accel.sh@17 -- # local accel_module 00:07:31.031 17:10:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:31.031 17:10:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:31.031 17:10:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.031 17:10:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.031 17:10:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.031 17:10:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.031 17:10:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.031 17:10:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.031 17:10:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.031 17:10:27 -- accel/accel.sh@42 -- # jq -r . 00:07:31.031 [2024-12-14 17:10:27.514137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:31.031 [2024-12-14 17:10:27.514201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198279 ] 00:07:31.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.031 [2024-12-14 17:10:27.582003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.031 [2024-12-14 17:10:27.616617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.410 17:10:28 -- accel/accel.sh@18 -- # out=' 00:07:32.410 SPDK Configuration: 00:07:32.410 Core mask: 0x1 00:07:32.410 00:07:32.410 Accel Perf Configuration: 00:07:32.410 Workload Type: crc32c 00:07:32.410 CRC-32C seed: 0 00:07:32.410 Transfer size: 4096 bytes 00:07:32.410 Vector count 2 00:07:32.410 Module: software 00:07:32.410 Queue depth: 32 00:07:32.410 Allocate depth: 32 00:07:32.410 # threads/core: 1 00:07:32.410 Run time: 1 seconds 00:07:32.410 Verify: Yes 00:07:32.410 00:07:32.410 Running for 1 seconds... 00:07:32.410 00:07:32.410 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.410 ------------------------------------------------------------------------------------ 00:07:32.410 0,0 479328/s 3744 MiB/s 0 0 00:07:32.410 ==================================================================================== 00:07:32.410 Total 479328/s 1872 MiB/s 0 0' 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:32.410 17:10:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:32.410 17:10:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.410 17:10:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.410 17:10:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.410 17:10:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.410 17:10:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.410 17:10:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.410 17:10:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.410 17:10:28 -- accel/accel.sh@42 -- # jq -r . 00:07:32.410 [2024-12-14 17:10:28.795705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:32.410 [2024-12-14 17:10:28.795761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198545 ] 00:07:32.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.410 [2024-12-14 17:10:28.863675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.410 [2024-12-14 17:10:28.897347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val= 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val= 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val=0x1 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val= 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val= 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val=crc32c 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val=0 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val= 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val=software 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val=32 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val=32 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- 
accel/accel.sh@21 -- # val=1 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.410 17:10:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.410 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.410 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.411 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.411 17:10:28 -- accel/accel.sh@21 -- # val=Yes 00:07:32.411 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.411 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.411 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.411 17:10:28 -- accel/accel.sh@21 -- # val= 00:07:32.411 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.411 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.411 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:32.411 17:10:28 -- accel/accel.sh@21 -- # val= 00:07:32.411 17:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.411 17:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:32.411 17:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:33.788 17:10:30 -- accel/accel.sh@21 -- # val= 00:07:33.788 17:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.788 17:10:30 -- accel/accel.sh@21 -- # val= 00:07:33.788 17:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.788 17:10:30 -- accel/accel.sh@21 -- # val= 00:07:33.788 17:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.788 17:10:30 -- accel/accel.sh@21 -- # val= 00:07:33.788 17:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.788 17:10:30 -- accel/accel.sh@21 -- # val= 00:07:33.788 17:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.788 17:10:30 -- accel/accel.sh@21 -- # val= 00:07:33.788 17:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.788 17:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.788 17:10:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.788 17:10:30 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:33.789 17:10:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.789 00:07:33.789 real 0m2.577s 00:07:33.789 user 0m2.325s 00:07:33.789 sys 0m0.248s 00:07:33.789 17:10:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.789 17:10:30 -- common/autotest_common.sh@10 -- # set +x 00:07:33.789 ************************************ 00:07:33.789 END TEST accel_crc32c_C2 00:07:33.789 ************************************ 00:07:33.789 17:10:30 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:33.789 17:10:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:33.789 17:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.789 17:10:30 -- common/autotest_common.sh@10 -- # set +x 00:07:33.789 ************************************ 00:07:33.789 START TEST accel_copy 
00:07:33.789 ************************************ 00:07:33.789 17:10:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:33.789 17:10:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.789 17:10:30 -- accel/accel.sh@17 -- # local accel_module 00:07:33.789 17:10:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:33.789 17:10:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:33.789 17:10:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.789 17:10:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.789 17:10:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.789 17:10:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.789 17:10:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.789 17:10:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.789 17:10:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.789 17:10:30 -- accel/accel.sh@42 -- # jq -r . 00:07:33.789 [2024-12-14 17:10:30.133151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:33.789 [2024-12-14 17:10:30.133221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198836 ] 00:07:33.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.789 [2024-12-14 17:10:30.202850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.789 [2024-12-14 17:10:30.238607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.728 17:10:31 -- accel/accel.sh@18 -- # out=' 00:07:34.728 SPDK Configuration: 00:07:34.728 Core mask: 0x1 00:07:34.728 00:07:34.728 Accel Perf Configuration: 00:07:34.728 Workload Type: copy 00:07:34.728 Transfer size: 4096 bytes 00:07:34.728 Vector count 1 00:07:34.728 Module: software 00:07:34.728 Queue depth: 32 00:07:34.728 Allocate depth: 32 00:07:34.728 # threads/core: 1 00:07:34.728 Run time: 1 seconds 00:07:34.728 Verify: Yes 00:07:34.728 00:07:34.728 Running for 1 seconds... 00:07:34.728 00:07:34.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.728 ------------------------------------------------------------------------------------ 00:07:34.728 0,0 437696/s 1709 MiB/s 0 0 00:07:34.728 ==================================================================================== 00:07:34.728 Total 437696/s 1709 MiB/s 0 0' 00:07:34.728 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.728 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.728 17:10:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:34.728 17:10:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:34.728 17:10:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.728 17:10:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.728 17:10:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.728 17:10:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.728 17:10:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.728 17:10:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.728 17:10:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.728 17:10:31 -- accel/accel.sh@42 -- # jq -r . 00:07:34.987 [2024-12-14 17:10:31.429162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:34.987 [2024-12-14 17:10:31.429237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198985 ] 00:07:34.987 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.987 [2024-12-14 17:10:31.499481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.987 [2024-12-14 17:10:31.533763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val= 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val= 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val=0x1 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val= 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val= 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val=copy 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val= 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val=software 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val=32 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val=32 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val=1 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val=Yes 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val= 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:34.987 17:10:31 -- accel/accel.sh@21 -- # val= 00:07:34.987 17:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:34.987 17:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:36.364 17:10:32 -- accel/accel.sh@21 -- # val= 00:07:36.364 17:10:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # IFS=: 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # read -r var val 00:07:36.364 17:10:32 -- accel/accel.sh@21 -- # val= 00:07:36.364 17:10:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # IFS=: 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # read -r var val 00:07:36.364 17:10:32 -- accel/accel.sh@21 -- # val= 00:07:36.364 17:10:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # IFS=: 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # read -r var val 00:07:36.364 17:10:32 -- accel/accel.sh@21 -- # val= 00:07:36.364 17:10:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # IFS=: 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # read -r var val 00:07:36.364 17:10:32 -- accel/accel.sh@21 -- # val= 00:07:36.364 17:10:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # IFS=: 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # read -r var val 00:07:36.364 17:10:32 -- accel/accel.sh@21 -- # val= 00:07:36.364 17:10:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # IFS=: 00:07:36.364 17:10:32 -- accel/accel.sh@20 -- # read -r var val 00:07:36.365 17:10:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.365 17:10:32 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:36.365 17:10:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.365 00:07:36.365 real 0m2.590s 00:07:36.365 user 0m2.336s 00:07:36.365 sys 0m0.251s 00:07:36.365 17:10:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.365 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:36.365 ************************************ 00:07:36.365 END TEST accel_copy 00:07:36.365 ************************************ 00:07:36.365 17:10:32 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:36.365 17:10:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:36.365 17:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.365 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:36.365 ************************************ 00:07:36.365 START TEST accel_fill 00:07:36.365 ************************************ 00:07:36.365 17:10:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:36.365 17:10:32 -- accel/accel.sh@16 -- # local accel_opc 
00:07:36.365 17:10:32 -- accel/accel.sh@17 -- # local accel_module 00:07:36.365 17:10:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:36.365 17:10:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:36.365 17:10:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.365 17:10:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.365 17:10:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.365 17:10:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.365 17:10:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.365 17:10:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.365 17:10:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.365 17:10:32 -- accel/accel.sh@42 -- # jq -r . 00:07:36.365 [2024-12-14 17:10:32.764980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:36.365 [2024-12-14 17:10:32.765062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199159 ] 00:07:36.365 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.365 [2024-12-14 17:10:32.835519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.365 [2024-12-14 17:10:32.870248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.743 17:10:34 -- accel/accel.sh@18 -- # out=' 00:07:37.743 SPDK Configuration: 00:07:37.743 Core mask: 0x1 00:07:37.743 00:07:37.743 Accel Perf Configuration: 00:07:37.743 Workload Type: fill 00:07:37.743 Fill pattern: 0x80 00:07:37.743 Transfer size: 4096 bytes 00:07:37.743 Vector count 1 00:07:37.743 Module: software 00:07:37.743 Queue depth: 64 00:07:37.743 Allocate depth: 64 00:07:37.743 # threads/core: 1 00:07:37.743 Run time: 1 seconds 00:07:37.743 Verify: Yes 00:07:37.743 00:07:37.743 Running for 1 seconds... 00:07:37.743 00:07:37.743 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.743 ------------------------------------------------------------------------------------ 00:07:37.743 0,0 700288/s 2735 MiB/s 0 0 00:07:37.743 ==================================================================================== 00:07:37.743 Total 700288/s 2735 MiB/s 0 0' 00:07:37.743 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.743 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.743 17:10:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:37.743 17:10:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:37.743 17:10:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.743 17:10:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.743 17:10:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.743 17:10:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.743 17:10:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.743 17:10:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.743 17:10:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.743 17:10:34 -- accel/accel.sh@42 -- # jq -r . 00:07:37.743 [2024-12-14 17:10:34.049231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:37.744 [2024-12-14 17:10:34.049297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199407 ] 00:07:37.744 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.744 [2024-12-14 17:10:34.117846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.744 [2024-12-14 17:10:34.151619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val= 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val= 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val=0x1 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val= 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val= 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val=fill 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val=0x80 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val= 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val=software 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val=64 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val=64 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- 
accel/accel.sh@21 -- # val=1 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val=Yes 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val= 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:37.744 17:10:34 -- accel/accel.sh@21 -- # val= 00:07:37.744 17:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:37.744 17:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:38.681 17:10:35 -- accel/accel.sh@21 -- # val= 00:07:38.681 17:10:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # IFS=: 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # read -r var val 00:07:38.681 17:10:35 -- accel/accel.sh@21 -- # val= 00:07:38.681 17:10:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # IFS=: 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # read -r var val 00:07:38.681 17:10:35 -- accel/accel.sh@21 -- # val= 00:07:38.681 17:10:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # IFS=: 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # read -r var val 00:07:38.681 17:10:35 -- accel/accel.sh@21 -- # val= 00:07:38.681 17:10:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # IFS=: 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # read -r var val 00:07:38.681 17:10:35 -- accel/accel.sh@21 -- # val= 00:07:38.681 17:10:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # IFS=: 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # read -r var val 00:07:38.681 17:10:35 -- accel/accel.sh@21 -- # val= 00:07:38.681 17:10:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # IFS=: 00:07:38.681 17:10:35 -- accel/accel.sh@20 -- # read -r var val 00:07:38.681 17:10:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.681 17:10:35 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:38.681 17:10:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.681 00:07:38.681 real 0m2.577s 00:07:38.681 user 0m2.331s 00:07:38.681 sys 0m0.242s 00:07:38.681 17:10:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.681 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:07:38.681 ************************************ 00:07:38.681 END TEST accel_fill 00:07:38.681 ************************************ 00:07:38.681 17:10:35 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:38.681 17:10:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:38.681 17:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.681 17:10:35 -- common/autotest_common.sh@10 -- # set +x 00:07:38.681 ************************************ 00:07:38.681 START TEST 
accel_copy_crc32c 00:07:38.681 ************************************ 00:07:38.681 17:10:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:38.681 17:10:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.681 17:10:35 -- accel/accel.sh@17 -- # local accel_module 00:07:38.681 17:10:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:38.681 17:10:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:38.681 17:10:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.681 17:10:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.681 17:10:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.681 17:10:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.681 17:10:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.681 17:10:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.681 17:10:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.681 17:10:35 -- accel/accel.sh@42 -- # jq -r . 00:07:38.939 [2024-12-14 17:10:35.383175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:38.939 [2024-12-14 17:10:35.383262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199694 ] 00:07:38.939 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.939 [2024-12-14 17:10:35.452114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.939 [2024-12-14 17:10:35.486805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.317 17:10:36 -- accel/accel.sh@18 -- # out=' 00:07:40.317 SPDK Configuration: 00:07:40.317 Core mask: 0x1 00:07:40.317 00:07:40.317 Accel Perf Configuration: 00:07:40.317 Workload Type: copy_crc32c 00:07:40.317 CRC-32C seed: 0 00:07:40.317 Vector size: 4096 bytes 00:07:40.317 Transfer size: 4096 bytes 00:07:40.317 Vector count 1 00:07:40.317 Module: software 00:07:40.317 Queue depth: 32 00:07:40.317 Allocate depth: 32 00:07:40.317 # threads/core: 1 00:07:40.317 Run time: 1 seconds 00:07:40.317 Verify: Yes 00:07:40.317 00:07:40.317 Running for 1 seconds... 00:07:40.317 00:07:40.317 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.317 ------------------------------------------------------------------------------------ 00:07:40.317 0,0 346016/s 1351 MiB/s 0 0 00:07:40.317 ==================================================================================== 00:07:40.317 Total 346016/s 1351 MiB/s 0 0' 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:40.317 17:10:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:40.317 17:10:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.317 17:10:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.317 17:10:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.317 17:10:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.317 17:10:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.317 17:10:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.317 17:10:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.317 17:10:36 -- accel/accel.sh@42 -- # jq -r . 
00:07:40.317 [2024-12-14 17:10:36.665091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.317 [2024-12-14 17:10:36.665157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199962 ] 00:07:40.317 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.317 [2024-12-14 17:10:36.733099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.317 [2024-12-14 17:10:36.767212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val= 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val= 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=0x1 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val= 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val= 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=0 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val= 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=software 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=32 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 
00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=32 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=1 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val=Yes 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val= 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.317 17:10:36 -- accel/accel.sh@21 -- # val= 00:07:40.317 17:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:40.317 17:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:41.301 17:10:37 -- accel/accel.sh@21 -- # val= 00:07:41.302 17:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:41.302 17:10:37 -- accel/accel.sh@21 -- # val= 00:07:41.302 17:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:41.302 17:10:37 -- accel/accel.sh@21 -- # val= 00:07:41.302 17:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:41.302 17:10:37 -- accel/accel.sh@21 -- # val= 00:07:41.302 17:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:41.302 17:10:37 -- accel/accel.sh@21 -- # val= 00:07:41.302 17:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:41.302 17:10:37 -- accel/accel.sh@21 -- # val= 00:07:41.302 17:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:41.302 17:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:41.302 17:10:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.302 17:10:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:41.302 17:10:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.302 00:07:41.302 real 0m2.575s 00:07:41.302 user 0m2.317s 00:07:41.302 sys 0m0.254s 00:07:41.302 17:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.302 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:41.302 ************************************ 00:07:41.302 END TEST accel_copy_crc32c 00:07:41.302 ************************************ 00:07:41.302 
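The bandwidth column in the accel_perf tables above is consistent with a simple rule: transfers per second multiplied by the configured transfer size, expressed in MiB/s. A minimal sanity-check sketch in the same shell the harness uses (the figures are copied from the tables above; the snippet is illustrative only and not part of the captured run):

  # copy_crc32c row: 346016 transfers/s at 4096 bytes per transfer
  echo $(( 346016 * 4096 / 1024 / 1024 ))   # prints 1351, matching "1351 MiB/s"
  # fill row: 700288 transfers/s at 4096 bytes per transfer
  echo $(( 700288 * 4096 / 1024 / 1024 ))   # prints 2735, matching "2735 MiB/s"

The integer division simply truncates to whole MiB, which lines up with the values printed in the tables.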
17:10:37 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:41.302 17:10:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:41.302 17:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.302 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:41.302 ************************************ 00:07:41.302 START TEST accel_copy_crc32c_C2 00:07:41.302 ************************************ 00:07:41.302 17:10:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:41.302 17:10:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.632 17:10:37 -- accel/accel.sh@17 -- # local accel_module 00:07:41.632 17:10:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:41.632 17:10:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:41.632 17:10:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.632 17:10:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.632 17:10:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.632 17:10:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.632 17:10:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.632 17:10:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.632 17:10:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.632 17:10:37 -- accel/accel.sh@42 -- # jq -r . 00:07:41.632 [2024-12-14 17:10:37.995915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:41.632 [2024-12-14 17:10:37.995985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200245 ] 00:07:41.632 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.632 [2024-12-14 17:10:38.064400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.632 [2024-12-14 17:10:38.099263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.011 17:10:39 -- accel/accel.sh@18 -- # out=' 00:07:43.011 SPDK Configuration: 00:07:43.011 Core mask: 0x1 00:07:43.011 00:07:43.011 Accel Perf Configuration: 00:07:43.011 Workload Type: copy_crc32c 00:07:43.011 CRC-32C seed: 0 00:07:43.011 Vector size: 4096 bytes 00:07:43.011 Transfer size: 8192 bytes 00:07:43.011 Vector count 2 00:07:43.011 Module: software 00:07:43.011 Queue depth: 32 00:07:43.011 Allocate depth: 32 00:07:43.011 # threads/core: 1 00:07:43.011 Run time: 1 seconds 00:07:43.011 Verify: Yes 00:07:43.011 00:07:43.011 Running for 1 seconds... 
00:07:43.011 00:07:43.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.011 ------------------------------------------------------------------------------------ 00:07:43.011 0,0 251456/s 1964 MiB/s 0 0 00:07:43.011 ==================================================================================== 00:07:43.011 Total 251456/s 982 MiB/s 0 0' 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.011 17:10:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:43.011 17:10:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.011 17:10:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:43.011 17:10:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.011 17:10:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.011 17:10:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.011 17:10:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.011 17:10:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.011 17:10:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.011 17:10:39 -- accel/accel.sh@42 -- # jq -r . 00:07:43.011 [2024-12-14 17:10:39.289036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.011 [2024-12-14 17:10:39.289106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200478 ] 00:07:43.011 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.011 [2024-12-14 17:10:39.359729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.011 [2024-12-14 17:10:39.395113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.011 17:10:39 -- accel/accel.sh@21 -- # val= 00:07:43.011 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.011 17:10:39 -- accel/accel.sh@21 -- # val= 00:07:43.011 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.011 17:10:39 -- accel/accel.sh@21 -- # val=0x1 00:07:43.011 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.011 17:10:39 -- accel/accel.sh@21 -- # val= 00:07:43.011 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.011 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.011 17:10:39 -- accel/accel.sh@21 -- # val= 00:07:43.011 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val=0 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 
00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val= 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val=software 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val=32 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val=32 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val=1 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val=Yes 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val= 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.012 17:10:39 -- accel/accel.sh@21 -- # val= 00:07:43.012 17:10:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # IFS=: 00:07:43.012 17:10:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.948 17:10:40 -- accel/accel.sh@21 -- # val= 00:07:43.948 17:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:43.948 17:10:40 -- accel/accel.sh@21 -- # val= 00:07:43.948 17:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:43.948 17:10:40 -- accel/accel.sh@21 -- # val= 00:07:43.948 17:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:43.948 17:10:40 -- accel/accel.sh@21 -- # val= 00:07:43.948 17:10:40 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:43.948 17:10:40 -- accel/accel.sh@21 -- # val= 00:07:43.948 17:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:43.948 17:10:40 -- accel/accel.sh@21 -- # val= 00:07:43.948 17:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:43.948 17:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:43.948 17:10:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.948 17:10:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:43.948 17:10:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.948 00:07:43.948 real 0m2.590s 00:07:43.948 user 0m2.328s 00:07:43.948 sys 0m0.258s 00:07:43.948 17:10:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.948 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:43.948 ************************************ 00:07:43.948 END TEST accel_copy_crc32c_C2 00:07:43.948 ************************************ 00:07:43.948 17:10:40 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:43.948 17:10:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:43.948 17:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.948 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:43.948 ************************************ 00:07:43.948 START TEST accel_dualcast 00:07:43.948 ************************************ 00:07:43.948 17:10:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:43.948 17:10:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.948 17:10:40 -- accel/accel.sh@17 -- # local accel_module 00:07:43.948 17:10:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:43.948 17:10:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:43.948 17:10:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.948 17:10:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.948 17:10:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.948 17:10:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.948 17:10:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.948 17:10:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.948 17:10:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.948 17:10:40 -- accel/accel.sh@42 -- # jq -r . 00:07:43.949 [2024-12-14 17:10:40.629375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:43.949 [2024-12-14 17:10:40.629440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200673 ] 00:07:44.208 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.208 [2024-12-14 17:10:40.701881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.208 [2024-12-14 17:10:40.738187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.587 17:10:41 -- accel/accel.sh@18 -- # out=' 00:07:45.587 SPDK Configuration: 00:07:45.587 Core mask: 0x1 00:07:45.587 00:07:45.587 Accel Perf Configuration: 00:07:45.587 Workload Type: dualcast 00:07:45.587 Transfer size: 4096 bytes 00:07:45.587 Vector count 1 00:07:45.587 Module: software 00:07:45.587 Queue depth: 32 00:07:45.587 Allocate depth: 32 00:07:45.587 # threads/core: 1 00:07:45.587 Run time: 1 seconds 00:07:45.587 Verify: Yes 00:07:45.587 00:07:45.587 Running for 1 seconds... 00:07:45.587 00:07:45.587 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.587 ------------------------------------------------------------------------------------ 00:07:45.587 0,0 532448/s 2079 MiB/s 0 0 00:07:45.587 ==================================================================================== 00:07:45.587 Total 532448/s 2079 MiB/s 0 0' 00:07:45.587 17:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:45.587 17:10:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:45.587 17:10:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.587 17:10:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.587 17:10:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.587 17:10:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.587 17:10:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.587 17:10:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.587 17:10:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.587 17:10:41 -- accel/accel.sh@42 -- # jq -r . 00:07:45.587 [2024-12-14 17:10:41.927574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:45.587 [2024-12-14 17:10:41.927642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200833 ] 00:07:45.587 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.587 [2024-12-14 17:10:41.996363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.587 [2024-12-14 17:10:42.030808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val= 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val= 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val=0x1 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val= 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val= 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val=dualcast 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val= 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val=software 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val=32 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val=32 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.587 17:10:42 -- accel/accel.sh@21 -- # val=1 00:07:45.587 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.587 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.588 17:10:42 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.588 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.588 17:10:42 -- accel/accel.sh@21 -- # val=Yes 00:07:45.588 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.588 17:10:42 -- accel/accel.sh@21 -- # val= 00:07:45.588 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.588 17:10:42 -- accel/accel.sh@21 -- # val= 00:07:45.588 17:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.588 17:10:42 -- accel/accel.sh@20 -- # read -r var val 00:07:46.524 17:10:43 -- accel/accel.sh@21 -- # val= 00:07:46.524 17:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:46.524 17:10:43 -- accel/accel.sh@21 -- # val= 00:07:46.524 17:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:46.524 17:10:43 -- accel/accel.sh@21 -- # val= 00:07:46.524 17:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:46.524 17:10:43 -- accel/accel.sh@21 -- # val= 00:07:46.524 17:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:46.524 17:10:43 -- accel/accel.sh@21 -- # val= 00:07:46.524 17:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:46.524 17:10:43 -- accel/accel.sh@21 -- # val= 00:07:46.524 17:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:46.524 17:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:46.524 17:10:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.524 17:10:43 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:46.524 17:10:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.524 00:07:46.524 real 0m2.592s 00:07:46.524 user 0m2.330s 00:07:46.524 sys 0m0.260s 00:07:46.524 17:10:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.524 17:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:46.524 ************************************ 00:07:46.524 END TEST accel_dualcast 00:07:46.524 ************************************ 00:07:46.783 17:10:43 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:46.783 17:10:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:46.783 17:10:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.783 17:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:46.783 ************************************ 00:07:46.783 START TEST accel_compare 00:07:46.783 ************************************ 00:07:46.783 17:10:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:46.783 17:10:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.783 17:10:43 
-- accel/accel.sh@17 -- # local accel_module 00:07:46.783 17:10:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:46.783 17:10:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:46.783 17:10:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.783 17:10:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.783 17:10:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.783 17:10:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.783 17:10:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.783 17:10:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.783 17:10:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.783 17:10:43 -- accel/accel.sh@42 -- # jq -r . 00:07:46.783 [2024-12-14 17:10:43.265639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:46.783 [2024-12-14 17:10:43.265725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201111 ] 00:07:46.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.783 [2024-12-14 17:10:43.335713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.783 [2024-12-14 17:10:43.370516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.162 17:10:44 -- accel/accel.sh@18 -- # out=' 00:07:48.162 SPDK Configuration: 00:07:48.162 Core mask: 0x1 00:07:48.162 00:07:48.162 Accel Perf Configuration: 00:07:48.162 Workload Type: compare 00:07:48.162 Transfer size: 4096 bytes 00:07:48.162 Vector count 1 00:07:48.162 Module: software 00:07:48.162 Queue depth: 32 00:07:48.162 Allocate depth: 32 00:07:48.162 # threads/core: 1 00:07:48.162 Run time: 1 seconds 00:07:48.162 Verify: Yes 00:07:48.162 00:07:48.162 Running for 1 seconds... 00:07:48.162 00:07:48.162 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.162 ------------------------------------------------------------------------------------ 00:07:48.162 0,0 640160/s 2500 MiB/s 0 0 00:07:48.162 ==================================================================================== 00:07:48.162 Total 640160/s 2500 MiB/s 0 0' 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:48.162 17:10:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:48.162 17:10:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.162 17:10:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.162 17:10:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.162 17:10:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.162 17:10:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.162 17:10:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.162 17:10:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.162 17:10:44 -- accel/accel.sh@42 -- # jq -r . 00:07:48.162 [2024-12-14 17:10:44.562279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.162 [2024-12-14 17:10:44.562349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201383 ] 00:07:48.162 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.162 [2024-12-14 17:10:44.631893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.162 [2024-12-14 17:10:44.665784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val= 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val= 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val=0x1 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val= 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val= 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val=compare 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val= 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val=software 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val=32 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val=32 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val=1 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val=Yes 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val= 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:48.162 17:10:44 -- accel/accel.sh@21 -- # val= 00:07:48.162 17:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:48.162 17:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:49.541 17:10:45 -- accel/accel.sh@21 -- # val= 00:07:49.541 17:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:49.541 17:10:45 -- accel/accel.sh@21 -- # val= 00:07:49.541 17:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:49.541 17:10:45 -- accel/accel.sh@21 -- # val= 00:07:49.541 17:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:49.541 17:10:45 -- accel/accel.sh@21 -- # val= 00:07:49.541 17:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:49.541 17:10:45 -- accel/accel.sh@21 -- # val= 00:07:49.541 17:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:49.541 17:10:45 -- accel/accel.sh@21 -- # val= 00:07:49.541 17:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:49.541 17:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:49.541 17:10:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.541 17:10:45 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:49.541 17:10:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.541 00:07:49.541 real 0m2.593s 00:07:49.541 user 0m2.337s 00:07:49.541 sys 0m0.252s 00:07:49.541 17:10:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.541 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:49.541 ************************************ 00:07:49.541 END TEST accel_compare 00:07:49.541 ************************************ 00:07:49.541 17:10:45 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:49.541 17:10:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:49.541 17:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.541 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:49.541 ************************************ 00:07:49.541 START TEST accel_xor 00:07:49.541 ************************************ 00:07:49.541 17:10:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:49.541 17:10:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.541 17:10:45 -- accel/accel.sh@17 
-- # local accel_module 00:07:49.541 17:10:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:49.541 17:10:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:49.541 17:10:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.541 17:10:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.541 17:10:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.541 17:10:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.541 17:10:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.541 17:10:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.541 17:10:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.541 17:10:45 -- accel/accel.sh@42 -- # jq -r . 00:07:49.541 [2024-12-14 17:10:45.899932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:49.541 [2024-12-14 17:10:45.900001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201667 ] 00:07:49.541 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.541 [2024-12-14 17:10:45.968243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.541 [2024-12-14 17:10:46.003206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.918 17:10:47 -- accel/accel.sh@18 -- # out=' 00:07:50.918 SPDK Configuration: 00:07:50.918 Core mask: 0x1 00:07:50.918 00:07:50.918 Accel Perf Configuration: 00:07:50.918 Workload Type: xor 00:07:50.918 Source buffers: 2 00:07:50.918 Transfer size: 4096 bytes 00:07:50.918 Vector count 1 00:07:50.918 Module: software 00:07:50.918 Queue depth: 32 00:07:50.918 Allocate depth: 32 00:07:50.918 # threads/core: 1 00:07:50.918 Run time: 1 seconds 00:07:50.918 Verify: Yes 00:07:50.918 00:07:50.918 Running for 1 seconds... 00:07:50.918 00:07:50.918 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.918 ------------------------------------------------------------------------------------ 00:07:50.918 0,0 507008/s 1980 MiB/s 0 0 00:07:50.918 ==================================================================================== 00:07:50.918 Total 507008/s 1980 MiB/s 0 0' 00:07:50.918 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.918 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.918 17:10:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:50.918 17:10:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:50.918 17:10:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.918 17:10:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.918 17:10:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.918 17:10:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.918 17:10:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.918 17:10:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.918 17:10:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.918 17:10:47 -- accel/accel.sh@42 -- # jq -r . 00:07:50.918 [2024-12-14 17:10:47.192405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:50.918 [2024-12-14 17:10:47.192472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201936 ] 00:07:50.918 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.918 [2024-12-14 17:10:47.260437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.918 [2024-12-14 17:10:47.294737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.918 17:10:47 -- accel/accel.sh@21 -- # val= 00:07:50.918 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.918 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.918 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.918 17:10:47 -- accel/accel.sh@21 -- # val= 00:07:50.918 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val=0x1 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val= 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val= 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val=xor 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val=2 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val= 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val=software 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val=32 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val=32 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- 
accel/accel.sh@21 -- # val=1 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val=Yes 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val= 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:50.919 17:10:47 -- accel/accel.sh@21 -- # val= 00:07:50.919 17:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:50.919 17:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:51.856 17:10:48 -- accel/accel.sh@21 -- # val= 00:07:51.856 17:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.856 17:10:48 -- accel/accel.sh@21 -- # val= 00:07:51.856 17:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.856 17:10:48 -- accel/accel.sh@21 -- # val= 00:07:51.856 17:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.856 17:10:48 -- accel/accel.sh@21 -- # val= 00:07:51.856 17:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.856 17:10:48 -- accel/accel.sh@21 -- # val= 00:07:51.856 17:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.856 17:10:48 -- accel/accel.sh@21 -- # val= 00:07:51.856 17:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.856 17:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.856 17:10:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.856 17:10:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:51.856 17:10:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.856 00:07:51.856 real 0m2.587s 00:07:51.856 user 0m2.338s 00:07:51.856 sys 0m0.247s 00:07:51.856 17:10:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.856 17:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:51.856 ************************************ 00:07:51.856 END TEST accel_xor 00:07:51.856 ************************************ 00:07:51.856 17:10:48 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:51.856 17:10:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:51.856 17:10:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.856 17:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:51.856 ************************************ 00:07:51.856 START TEST accel_xor 
00:07:51.856 ************************************ 00:07:51.856 17:10:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:51.856 17:10:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.856 17:10:48 -- accel/accel.sh@17 -- # local accel_module 00:07:51.856 17:10:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:51.856 17:10:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:51.856 17:10:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.856 17:10:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.856 17:10:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.856 17:10:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.856 17:10:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.856 17:10:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.856 17:10:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.856 17:10:48 -- accel/accel.sh@42 -- # jq -r . 00:07:51.856 [2024-12-14 17:10:48.530415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:51.856 [2024-12-14 17:10:48.530511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202190 ] 00:07:52.116 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.116 [2024-12-14 17:10:48.601723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.116 [2024-12-14 17:10:48.637425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.494 17:10:49 -- accel/accel.sh@18 -- # out=' 00:07:53.494 SPDK Configuration: 00:07:53.494 Core mask: 0x1 00:07:53.494 00:07:53.494 Accel Perf Configuration: 00:07:53.494 Workload Type: xor 00:07:53.494 Source buffers: 3 00:07:53.494 Transfer size: 4096 bytes 00:07:53.494 Vector count 1 00:07:53.494 Module: software 00:07:53.494 Queue depth: 32 00:07:53.494 Allocate depth: 32 00:07:53.494 # threads/core: 1 00:07:53.494 Run time: 1 seconds 00:07:53.494 Verify: Yes 00:07:53.494 00:07:53.494 Running for 1 seconds... 00:07:53.494 00:07:53.494 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.494 ------------------------------------------------------------------------------------ 00:07:53.494 0,0 469088/s 1832 MiB/s 0 0 00:07:53.494 ==================================================================================== 00:07:53.494 Total 469088/s 1832 MiB/s 0 0' 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:53.494 17:10:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:53.494 17:10:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.494 17:10:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.494 17:10:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.494 17:10:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.494 17:10:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.494 17:10:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.494 17:10:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.494 17:10:49 -- accel/accel.sh@42 -- # jq -r . 00:07:53.494 [2024-12-14 17:10:49.828836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:53.494 [2024-12-14 17:10:49.828908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202336 ] 00:07:53.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.494 [2024-12-14 17:10:49.897847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.494 [2024-12-14 17:10:49.932370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val= 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val= 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val=0x1 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val= 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val= 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val=xor 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val=3 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val= 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val=software 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val=32 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val=32 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- 
accel/accel.sh@21 -- # val=1 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val=Yes 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val= 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:53.494 17:10:49 -- accel/accel.sh@21 -- # val= 00:07:53.494 17:10:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # IFS=: 00:07:53.494 17:10:49 -- accel/accel.sh@20 -- # read -r var val 00:07:54.430 17:10:51 -- accel/accel.sh@21 -- # val= 00:07:54.430 17:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:54.430 17:10:51 -- accel/accel.sh@21 -- # val= 00:07:54.430 17:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:54.430 17:10:51 -- accel/accel.sh@21 -- # val= 00:07:54.430 17:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:54.430 17:10:51 -- accel/accel.sh@21 -- # val= 00:07:54.430 17:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:54.430 17:10:51 -- accel/accel.sh@21 -- # val= 00:07:54.430 17:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:54.430 17:10:51 -- accel/accel.sh@21 -- # val= 00:07:54.430 17:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:54.430 17:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:54.430 17:10:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.430 17:10:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:54.430 17:10:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.430 00:07:54.430 real 0m2.595s 00:07:54.430 user 0m2.338s 00:07:54.430 sys 0m0.255s 00:07:54.430 17:10:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.430 17:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:54.430 ************************************ 00:07:54.430 END TEST accel_xor 00:07:54.430 ************************************ 00:07:54.690 17:10:51 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:54.690 17:10:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:54.690 17:10:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.690 17:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:54.690 ************************************ 00:07:54.690 START TEST 
accel_dif_verify 00:07:54.690 ************************************ 00:07:54.690 17:10:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:54.690 17:10:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.690 17:10:51 -- accel/accel.sh@17 -- # local accel_module 00:07:54.690 17:10:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:54.690 17:10:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:54.690 17:10:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.690 17:10:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.690 17:10:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.690 17:10:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.690 17:10:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.690 17:10:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.690 17:10:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.690 17:10:51 -- accel/accel.sh@42 -- # jq -r . 00:07:54.690 [2024-12-14 17:10:51.173526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:54.690 [2024-12-14 17:10:51.173594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202533 ] 00:07:54.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.690 [2024-12-14 17:10:51.244366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.690 [2024-12-14 17:10:51.279751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.070 17:10:52 -- accel/accel.sh@18 -- # out=' 00:07:56.070 SPDK Configuration: 00:07:56.070 Core mask: 0x1 00:07:56.070 00:07:56.070 Accel Perf Configuration: 00:07:56.070 Workload Type: dif_verify 00:07:56.070 Vector size: 4096 bytes 00:07:56.070 Transfer size: 4096 bytes 00:07:56.070 Block size: 512 bytes 00:07:56.070 Metadata size: 8 bytes 00:07:56.070 Vector count 1 00:07:56.070 Module: software 00:07:56.070 Queue depth: 32 00:07:56.070 Allocate depth: 32 00:07:56.070 # threads/core: 1 00:07:56.070 Run time: 1 seconds 00:07:56.070 Verify: No 00:07:56.070 00:07:56.070 Running for 1 seconds... 00:07:56.070 00:07:56.070 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.070 ------------------------------------------------------------------------------------ 00:07:56.070 0,0 137696/s 546 MiB/s 0 0 00:07:56.070 ==================================================================================== 00:07:56.070 Total 137696/s 537 MiB/s 0 0' 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:56.070 17:10:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:56.070 17:10:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.070 17:10:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.070 17:10:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.070 17:10:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.070 17:10:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.070 17:10:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.070 17:10:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.070 17:10:52 -- accel/accel.sh@42 -- # jq -r . 
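The DIF-verify workload above follows the same pattern; a hand-run equivalent, under the same assumptions as the xor sketch earlier (software module, no JSON config), would be:

# 1-second software dif_verify benchmark; the 4096-byte transfer size,
# 512-byte block size and 8-byte metadata size shown in the configuration
# dump are the defaults for this run, so no extra flags are passed here
"$SPDK/build/examples/accel_perf" -t 1 -w dif_verify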
00:07:56.070 [2024-12-14 17:10:52.470082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.070 [2024-12-14 17:10:52.470155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202795 ] 00:07:56.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.070 [2024-12-14 17:10:52.538801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.070 [2024-12-14 17:10:52.572907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val= 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val= 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val=0x1 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val= 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val= 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val=dif_verify 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val= 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val=software 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val=32 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val=32 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val=1 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val=No 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val= 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:56.070 17:10:52 -- accel/accel.sh@21 -- # val= 00:07:56.070 17:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # IFS=: 00:07:56.070 17:10:52 -- accel/accel.sh@20 -- # read -r var val 00:07:57.448 17:10:53 -- accel/accel.sh@21 -- # val= 00:07:57.448 17:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:57.448 17:10:53 -- accel/accel.sh@21 -- # val= 00:07:57.448 17:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:57.448 17:10:53 -- accel/accel.sh@21 -- # val= 00:07:57.448 17:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:57.448 17:10:53 -- accel/accel.sh@21 -- # val= 00:07:57.448 17:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:57.448 17:10:53 -- accel/accel.sh@21 -- # val= 00:07:57.448 17:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:57.448 17:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:57.448 17:10:53 -- accel/accel.sh@21 -- # val= 00:07:57.448 17:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.449 17:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:57.449 17:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:57.449 17:10:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:57.449 17:10:53 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:57.449 17:10:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.449 00:07:57.449 real 0m2.593s 00:07:57.449 user 0m2.337s 00:07:57.449 sys 0m0.255s 00:07:57.449 17:10:53 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.449 17:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:57.449 ************************************ 00:07:57.449 END TEST accel_dif_verify 00:07:57.449 ************************************ 00:07:57.449 17:10:53 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:57.449 17:10:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:57.449 17:10:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.449 17:10:53 -- common/autotest_common.sh@10 -- # set +x 00:07:57.449 ************************************ 00:07:57.449 START TEST accel_dif_generate 00:07:57.449 ************************************ 00:07:57.449 17:10:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:57.449 17:10:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.449 17:10:53 -- accel/accel.sh@17 -- # local accel_module 00:07:57.449 17:10:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:57.449 17:10:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:57.449 17:10:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.449 17:10:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.449 17:10:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.449 17:10:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.449 17:10:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.449 17:10:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.449 17:10:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.449 17:10:53 -- accel/accel.sh@42 -- # jq -r . 00:07:57.449 [2024-12-14 17:10:53.811055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:57.449 [2024-12-14 17:10:53.811125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203082 ] 00:07:57.449 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.449 [2024-12-14 17:10:53.880156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.449 [2024-12-14 17:10:53.915203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.828 17:10:55 -- accel/accel.sh@18 -- # out=' 00:07:58.828 SPDK Configuration: 00:07:58.828 Core mask: 0x1 00:07:58.828 00:07:58.828 Accel Perf Configuration: 00:07:58.828 Workload Type: dif_generate 00:07:58.828 Vector size: 4096 bytes 00:07:58.828 Transfer size: 4096 bytes 00:07:58.828 Block size: 512 bytes 00:07:58.828 Metadata size: 8 bytes 00:07:58.828 Vector count 1 00:07:58.828 Module: software 00:07:58.828 Queue depth: 32 00:07:58.828 Allocate depth: 32 00:07:58.828 # threads/core: 1 00:07:58.828 Run time: 1 seconds 00:07:58.828 Verify: No 00:07:58.828 00:07:58.828 Running for 1 seconds... 
00:07:58.828 00:07:58.828 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.828 ------------------------------------------------------------------------------------ 00:07:58.828 0,0 164480/s 652 MiB/s 0 0 00:07:58.828 ==================================================================================== 00:07:58.828 Total 164480/s 642 MiB/s 0 0' 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:58.828 17:10:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:58.828 17:10:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.828 17:10:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.828 17:10:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.828 17:10:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.828 17:10:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.828 17:10:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.828 17:10:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.828 17:10:55 -- accel/accel.sh@42 -- # jq -r . 00:07:58.828 [2024-12-14 17:10:55.106025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.828 [2024-12-14 17:10:55.106094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203350 ] 00:07:58.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.828 [2024-12-14 17:10:55.175817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.828 [2024-12-14 17:10:55.210430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val= 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val= 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val=0x1 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val= 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val= 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val=dif_generate 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 
00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val= 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val=software 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val=32 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val=32 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val=1 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val=No 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val= 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:58.828 17:10:55 -- accel/accel.sh@21 -- # val= 00:07:58.828 17:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # IFS=: 00:07:58.828 17:10:55 -- accel/accel.sh@20 -- # read -r var val 00:07:59.765 17:10:56 -- accel/accel.sh@21 -- # val= 00:07:59.765 17:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:59.765 17:10:56 -- accel/accel.sh@21 -- # val= 00:07:59.765 17:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:59.765 17:10:56 -- accel/accel.sh@21 -- # val= 00:07:59.765 17:10:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:59.765 17:10:56 -- accel/accel.sh@21 -- # val= 00:07:59.765 17:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:59.765 17:10:56 -- accel/accel.sh@21 -- # val= 00:07:59.765 17:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:59.765 17:10:56 -- accel/accel.sh@21 -- # val= 00:07:59.765 17:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:59.765 17:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:59.765 17:10:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.765 17:10:56 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:59.765 17:10:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.765 00:07:59.765 real 0m2.594s 00:07:59.765 user 0m2.329s 00:07:59.765 sys 0m0.263s 00:07:59.765 17:10:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.765 17:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:59.765 ************************************ 00:07:59.765 END TEST accel_dif_generate 00:07:59.765 ************************************ 00:07:59.765 17:10:56 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:59.765 17:10:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:59.765 17:10:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.765 17:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:59.765 ************************************ 00:07:59.765 START TEST accel_dif_generate_copy 00:07:59.765 ************************************ 00:07:59.765 17:10:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:59.765 17:10:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.765 17:10:56 -- accel/accel.sh@17 -- # local accel_module 00:07:59.765 17:10:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:59.765 17:10:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:59.765 17:10:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.765 17:10:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.765 17:10:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.765 17:10:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.765 17:10:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.765 17:10:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.765 17:10:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.765 17:10:56 -- accel/accel.sh@42 -- # jq -r . 00:07:59.765 [2024-12-14 17:10:56.448828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
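Both DIF-generate variants exercised here take the same shape as the earlier sketches; assuming the same workspace path and software fallback, the two runs correspond to:

# 1-second dif_generate benchmark (DIF fields generated in place)
"$SPDK/build/examples/accel_perf" -t 1 -w dif_generate
# 1-second dif_generate_copy benchmark (DIF fields generated into a copy)
"$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy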
00:07:59.765 [2024-12-14 17:10:56.448897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203636 ] 00:08:00.025 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.025 [2024-12-14 17:10:56.519890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.025 [2024-12-14 17:10:56.554973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.402 17:10:57 -- accel/accel.sh@18 -- # out=' 00:08:01.402 SPDK Configuration: 00:08:01.402 Core mask: 0x1 00:08:01.402 00:08:01.402 Accel Perf Configuration: 00:08:01.402 Workload Type: dif_generate_copy 00:08:01.402 Vector size: 4096 bytes 00:08:01.402 Transfer size: 4096 bytes 00:08:01.402 Vector count 1 00:08:01.402 Module: software 00:08:01.402 Queue depth: 32 00:08:01.402 Allocate depth: 32 00:08:01.402 # threads/core: 1 00:08:01.402 Run time: 1 seconds 00:08:01.402 Verify: No 00:08:01.402 00:08:01.402 Running for 1 seconds... 00:08:01.402 00:08:01.402 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.402 ------------------------------------------------------------------------------------ 00:08:01.402 0,0 127392/s 505 MiB/s 0 0 00:08:01.402 ==================================================================================== 00:08:01.403 Total 127392/s 497 MiB/s 0 0' 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:01.403 17:10:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:01.403 17:10:57 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.403 17:10:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.403 17:10:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.403 17:10:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.403 17:10:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.403 17:10:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.403 17:10:57 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.403 17:10:57 -- accel/accel.sh@42 -- # jq -r . 00:08:01.403 [2024-12-14 17:10:57.745622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
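The Total bandwidth figure in these tables is simply the total transfer rate multiplied by the 4096-byte transfer size; a quick shell check against the dif_generate_copy total above:

# 127392 transfers/s x 4096 bytes = 521,797,632 bytes/s ~ 497 MiB/s,
# matching the "Total 127392/s 497 MiB/s" row reported by accel_perf
echo $(( 127392 * 4096 / 1048576 ))   # prints 497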
00:08:01.403 [2024-12-14 17:10:57.745694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203841 ] 00:08:01.403 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.403 [2024-12-14 17:10:57.815430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.403 [2024-12-14 17:10:57.850662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val= 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val= 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val=0x1 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val= 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val= 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val= 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val=software 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@23 -- # accel_module=software 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val=32 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val=32 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r 
var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val=1 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val=No 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val= 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:01.403 17:10:57 -- accel/accel.sh@21 -- # val= 00:08:01.403 17:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # IFS=: 00:08:01.403 17:10:57 -- accel/accel.sh@20 -- # read -r var val 00:08:02.340 17:10:59 -- accel/accel.sh@21 -- # val= 00:08:02.340 17:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # IFS=: 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # read -r var val 00:08:02.340 17:10:59 -- accel/accel.sh@21 -- # val= 00:08:02.340 17:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # IFS=: 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # read -r var val 00:08:02.340 17:10:59 -- accel/accel.sh@21 -- # val= 00:08:02.340 17:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # IFS=: 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # read -r var val 00:08:02.340 17:10:59 -- accel/accel.sh@21 -- # val= 00:08:02.340 17:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # IFS=: 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # read -r var val 00:08:02.340 17:10:59 -- accel/accel.sh@21 -- # val= 00:08:02.340 17:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # IFS=: 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # read -r var val 00:08:02.340 17:10:59 -- accel/accel.sh@21 -- # val= 00:08:02.340 17:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # IFS=: 00:08:02.340 17:10:59 -- accel/accel.sh@20 -- # read -r var val 00:08:02.340 17:10:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:02.340 17:10:59 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:02.340 17:10:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.340 00:08:02.340 real 0m2.597s 00:08:02.340 user 0m2.354s 00:08:02.340 sys 0m0.242s 00:08:02.340 17:10:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.340 17:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:02.340 ************************************ 00:08:02.340 END TEST accel_dif_generate_copy 00:08:02.340 ************************************ 00:08:02.599 17:10:59 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:02.599 17:10:59 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:02.599 17:10:59 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:02.599 17:10:59 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.599 17:10:59 -- common/autotest_common.sh@10 -- # set +x 00:08:02.599 ************************************ 00:08:02.599 START TEST accel_comp 00:08:02.599 ************************************ 00:08:02.599 17:10:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:02.599 17:10:59 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.599 17:10:59 -- accel/accel.sh@17 -- # local accel_module 00:08:02.599 17:10:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:02.599 17:10:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:02.599 17:10:59 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.599 17:10:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.599 17:10:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.599 17:10:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.599 17:10:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.599 17:10:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.599 17:10:59 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.599 17:10:59 -- accel/accel.sh@42 -- # jq -r . 00:08:02.599 [2024-12-14 17:10:59.088476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:02.599 [2024-12-14 17:10:59.088550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204030 ] 00:08:02.599 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.599 [2024-12-14 17:10:59.158373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.599 [2024-12-14 17:10:59.193767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.976 17:11:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:03.976 00:08:03.976 SPDK Configuration: 00:08:03.976 Core mask: 0x1 00:08:03.976 00:08:03.976 Accel Perf Configuration: 00:08:03.976 Workload Type: compress 00:08:03.976 Transfer size: 4096 bytes 00:08:03.976 Vector count 1 00:08:03.976 Module: software 00:08:03.976 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.976 Queue depth: 32 00:08:03.976 Allocate depth: 32 00:08:03.976 # threads/core: 1 00:08:03.976 Run time: 1 seconds 00:08:03.976 Verify: No 00:08:03.976 00:08:03.976 Running for 1 seconds... 
00:08:03.976 00:08:03.976 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:03.976 ------------------------------------------------------------------------------------ 00:08:03.976 0,0 64320/s 268 MiB/s 0 0 00:08:03.976 ==================================================================================== 00:08:03.976 Total 64320/s 251 MiB/s 0 0' 00:08:03.976 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.976 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.976 17:11:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.976 17:11:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.976 17:11:00 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.976 17:11:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.976 17:11:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.976 17:11:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.976 17:11:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.976 17:11:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.976 17:11:00 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.976 17:11:00 -- accel/accel.sh@42 -- # jq -r . 00:08:03.976 [2024-12-14 17:11:00.390991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:03.976 [2024-12-14 17:11:00.391065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204210 ] 00:08:03.976 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.976 [2024-12-14 17:11:00.461882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.976 [2024-12-14 17:11:00.497204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.976 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.976 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.976 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.976 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.976 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.976 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.976 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.976 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.976 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.976 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=0x1 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=compress 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=software 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@23 -- # accel_module=software 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=32 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=32 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=1 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val=No 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:03.977 17:11:00 -- accel/accel.sh@21 -- # val= 00:08:03.977 17:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # IFS=: 00:08:03.977 17:11:00 -- accel/accel.sh@20 -- # read -r var val 00:08:05.353 17:11:01 -- accel/accel.sh@21 -- # val= 00:08:05.353 17:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # IFS=: 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # read -r var val 00:08:05.353 17:11:01 -- accel/accel.sh@21 -- # val= 00:08:05.353 17:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # IFS=: 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # read -r var val 00:08:05.353 17:11:01 -- accel/accel.sh@21 -- # val= 00:08:05.353 17:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # IFS=: 00:08:05.353 
17:11:01 -- accel/accel.sh@20 -- # read -r var val 00:08:05.353 17:11:01 -- accel/accel.sh@21 -- # val= 00:08:05.353 17:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # IFS=: 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # read -r var val 00:08:05.353 17:11:01 -- accel/accel.sh@21 -- # val= 00:08:05.353 17:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # IFS=: 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # read -r var val 00:08:05.353 17:11:01 -- accel/accel.sh@21 -- # val= 00:08:05.353 17:11:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # IFS=: 00:08:05.353 17:11:01 -- accel/accel.sh@20 -- # read -r var val 00:08:05.353 17:11:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:05.353 17:11:01 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:05.353 17:11:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.353 00:08:05.353 real 0m2.604s 00:08:05.353 user 0m2.338s 00:08:05.353 sys 0m0.264s 00:08:05.353 17:11:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.353 17:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.353 ************************************ 00:08:05.353 END TEST accel_comp 00:08:05.353 ************************************ 00:08:05.353 17:11:01 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:05.353 17:11:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:05.353 17:11:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.353 17:11:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.353 ************************************ 00:08:05.353 START TEST accel_decomp 00:08:05.353 ************************************ 00:08:05.353 17:11:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:05.353 17:11:01 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.353 17:11:01 -- accel/accel.sh@17 -- # local accel_module 00:08:05.353 17:11:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:05.353 17:11:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:05.353 17:11:01 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.353 17:11:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.353 17:11:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.353 17:11:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.353 17:11:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.353 17:11:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.353 17:11:01 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.353 17:11:01 -- accel/accel.sh@42 -- # jq -r . 00:08:05.353 [2024-12-14 17:11:01.736858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
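The compression pass that just finished reads its input from the bib file shipped in the SPDK tree; a standalone re-run, under the same assumptions as the earlier sketches, would look like:

# 1-second software compress benchmark over the bundled test input,
# mirroring: accel_perf -c /dev/fd/62 -t 1 -w compress -l $SPDK/test/accel/bib
"$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$SPDK/test/accel/bib"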
00:08:05.353 [2024-12-14 17:11:01.736927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204493 ] 00:08:05.353 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.353 [2024-12-14 17:11:01.806087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.353 [2024-12-14 17:11:01.840874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.731 17:11:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:06.731 00:08:06.731 SPDK Configuration: 00:08:06.731 Core mask: 0x1 00:08:06.731 00:08:06.731 Accel Perf Configuration: 00:08:06.731 Workload Type: decompress 00:08:06.731 Transfer size: 4096 bytes 00:08:06.731 Vector count 1 00:08:06.731 Module: software 00:08:06.731 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.731 Queue depth: 32 00:08:06.731 Allocate depth: 32 00:08:06.731 # threads/core: 1 00:08:06.731 Run time: 1 seconds 00:08:06.731 Verify: Yes 00:08:06.731 00:08:06.731 Running for 1 seconds... 00:08:06.731 00:08:06.731 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:06.731 ------------------------------------------------------------------------------------ 00:08:06.731 0,0 86272/s 158 MiB/s 0 0 00:08:06.731 ==================================================================================== 00:08:06.731 Total 86272/s 337 MiB/s 0 0' 00:08:06.731 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.731 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.731 17:11:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:06.731 17:11:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:06.731 17:11:03 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.731 17:11:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.731 17:11:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.731 17:11:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.731 17:11:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.731 17:11:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.731 17:11:03 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.731 17:11:03 -- accel/accel.sh@42 -- # jq -r . 00:08:06.731 [2024-12-14 17:11:03.035603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
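The decompress run above pairs with it, adding -y, which shows up as "Verify: Yes" in the configuration dump; a matching sketch under the same assumptions:

# 1-second software decompress benchmark with result verification enabled,
# using the same bundled input file as the compress run
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y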
00:08:06.731 [2024-12-14 17:11:03.035681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204765 ] 00:08:06.731 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.731 [2024-12-14 17:11:03.105284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.731 [2024-12-14 17:11:03.139623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.731 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.731 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.731 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.731 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.731 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.731 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.731 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=0x1 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=decompress 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=software 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=32 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- 
accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=32 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=1 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val=Yes 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:06.732 17:11:03 -- accel/accel.sh@21 -- # val= 00:08:06.732 17:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # IFS=: 00:08:06.732 17:11:03 -- accel/accel.sh@20 -- # read -r var val 00:08:07.670 17:11:04 -- accel/accel.sh@21 -- # val= 00:08:07.670 17:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # IFS=: 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # read -r var val 00:08:07.670 17:11:04 -- accel/accel.sh@21 -- # val= 00:08:07.670 17:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # IFS=: 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # read -r var val 00:08:07.670 17:11:04 -- accel/accel.sh@21 -- # val= 00:08:07.670 17:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # IFS=: 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # read -r var val 00:08:07.670 17:11:04 -- accel/accel.sh@21 -- # val= 00:08:07.670 17:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # IFS=: 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # read -r var val 00:08:07.670 17:11:04 -- accel/accel.sh@21 -- # val= 00:08:07.670 17:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # IFS=: 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # read -r var val 00:08:07.670 17:11:04 -- accel/accel.sh@21 -- # val= 00:08:07.670 17:11:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # IFS=: 00:08:07.670 17:11:04 -- accel/accel.sh@20 -- # read -r var val 00:08:07.670 17:11:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:07.670 17:11:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:07.670 17:11:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.670 00:08:07.670 real 0m2.600s 00:08:07.670 user 0m2.347s 00:08:07.670 sys 0m0.251s 00:08:07.670 17:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.670 17:11:04 -- common/autotest_common.sh@10 -- # set +x 00:08:07.670 ************************************ 00:08:07.670 END TEST accel_decomp 00:08:07.670 ************************************ 00:08:07.670 17:11:04 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:07.670 17:11:04 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:07.670 17:11:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.670 17:11:04 -- common/autotest_common.sh@10 -- # set +x 00:08:07.670 ************************************ 00:08:07.670 START TEST accel_decmop_full 00:08:07.670 ************************************ 00:08:07.670 17:11:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:07.670 17:11:04 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.670 17:11:04 -- accel/accel.sh@17 -- # local accel_module 00:08:07.929 17:11:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:07.929 17:11:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:07.929 17:11:04 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.929 17:11:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.929 17:11:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.929 17:11:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.929 17:11:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.929 17:11:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.929 17:11:04 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.929 17:11:04 -- accel/accel.sh@42 -- # jq -r . 00:08:07.929 [2024-12-14 17:11:04.377366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:07.929 [2024-12-14 17:11:04.377436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205050 ] 00:08:07.929 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.929 [2024-12-14 17:11:04.447398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.929 [2024-12-14 17:11:04.482548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.305 17:11:05 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:09.305 00:08:09.305 SPDK Configuration: 00:08:09.305 Core mask: 0x1 00:08:09.305 00:08:09.305 Accel Perf Configuration: 00:08:09.305 Workload Type: decompress 00:08:09.305 Transfer size: 111250 bytes 00:08:09.305 Vector count 1 00:08:09.305 Module: software 00:08:09.305 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:09.305 Queue depth: 32 00:08:09.305 Allocate depth: 32 00:08:09.305 # threads/core: 1 00:08:09.305 Run time: 1 seconds 00:08:09.305 Verify: Yes 00:08:09.305 00:08:09.305 Running for 1 seconds... 
00:08:09.305 00:08:09.305 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:09.305 ------------------------------------------------------------------------------------ 00:08:09.305 0,0 5664/s 233 MiB/s 0 0 00:08:09.305 ==================================================================================== 00:08:09.305 Total 5664/s 600 MiB/s 0 0' 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.305 17:11:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:09.305 17:11:05 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.305 17:11:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.305 17:11:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.305 17:11:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.305 17:11:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.305 17:11:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.305 17:11:05 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.305 17:11:05 -- accel/accel.sh@42 -- # jq -r . 00:08:09.305 [2024-12-14 17:11:05.682733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:09.305 [2024-12-14 17:11:05.682805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205319 ] 00:08:09.305 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.305 [2024-12-14 17:11:05.751347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.305 [2024-12-14 17:11:05.785562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=0x1 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=decompress 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 
00:08:09.305 17:11:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=software 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=32 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=32 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=1 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val=Yes 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 17:11:05 -- accel/accel.sh@21 -- # val= 00:08:09.305 17:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 17:11:05 -- accel/accel.sh@20 -- # read -r var val 00:08:10.727 17:11:06 -- accel/accel.sh@21 -- # val= 00:08:10.727 17:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.727 17:11:06 -- accel/accel.sh@20 -- # IFS=: 00:08:10.727 17:11:06 -- accel/accel.sh@20 -- # read -r var val 00:08:10.727 17:11:06 -- accel/accel.sh@21 -- # val= 00:08:10.727 17:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.727 17:11:06 -- accel/accel.sh@20 -- # IFS=: 00:08:10.727 17:11:06 -- accel/accel.sh@20 -- # read -r var val 00:08:10.727 17:11:06 -- accel/accel.sh@21 -- # val= 00:08:10.727 17:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.727 17:11:06 -- 
accel/accel.sh@20 -- # IFS=: 00:08:10.727 17:11:06 -- accel/accel.sh@20 -- # read -r var val 00:08:10.727 17:11:06 -- accel/accel.sh@21 -- # val= 00:08:10.727 17:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.727 17:11:06 -- accel/accel.sh@20 -- # IFS=: 00:08:10.727 17:11:06 -- accel/accel.sh@20 -- # read -r var val 00:08:10.727 17:11:06 -- accel/accel.sh@21 -- # val= 00:08:10.727 17:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.728 17:11:06 -- accel/accel.sh@20 -- # IFS=: 00:08:10.728 17:11:06 -- accel/accel.sh@20 -- # read -r var val 00:08:10.728 17:11:06 -- accel/accel.sh@21 -- # val= 00:08:10.728 17:11:06 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.728 17:11:06 -- accel/accel.sh@20 -- # IFS=: 00:08:10.728 17:11:06 -- accel/accel.sh@20 -- # read -r var val 00:08:10.728 17:11:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.728 17:11:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:10.728 17:11:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.728 00:08:10.728 real 0m2.609s 00:08:10.728 user 0m2.356s 00:08:10.728 sys 0m0.251s 00:08:10.728 17:11:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.728 17:11:06 -- common/autotest_common.sh@10 -- # set +x 00:08:10.728 ************************************ 00:08:10.728 END TEST accel_decmop_full 00:08:10.728 ************************************ 00:08:10.728 17:11:06 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:10.728 17:11:06 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:10.728 17:11:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.728 17:11:06 -- common/autotest_common.sh@10 -- # set +x 00:08:10.728 ************************************ 00:08:10.728 START TEST accel_decomp_mcore 00:08:10.728 ************************************ 00:08:10.728 17:11:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:10.728 17:11:07 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.728 17:11:07 -- accel/accel.sh@17 -- # local accel_module 00:08:10.728 17:11:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:10.728 17:11:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:10.728 17:11:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.728 17:11:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.728 17:11:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.728 17:11:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.728 17:11:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.728 17:11:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.728 17:11:07 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.728 17:11:07 -- accel/accel.sh@42 -- # jq -r . 00:08:10.728 [2024-12-14 17:11:07.033917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:10.728 [2024-12-14 17:11:07.034011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205560 ] 00:08:10.728 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.728 [2024-12-14 17:11:07.106120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.728 [2024-12-14 17:11:07.144238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.728 [2024-12-14 17:11:07.144332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.728 [2024-12-14 17:11:07.144393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.728 [2024-12-14 17:11:07.144395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.728 17:11:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:11.728 00:08:11.728 SPDK Configuration: 00:08:11.728 Core mask: 0xf 00:08:11.728 00:08:11.728 Accel Perf Configuration: 00:08:11.728 Workload Type: decompress 00:08:11.728 Transfer size: 4096 bytes 00:08:11.728 Vector count 1 00:08:11.728 Module: software 00:08:11.728 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:11.728 Queue depth: 32 00:08:11.728 Allocate depth: 32 00:08:11.728 # threads/core: 1 00:08:11.728 Run time: 1 seconds 00:08:11.728 Verify: Yes 00:08:11.728 00:08:11.728 Running for 1 seconds... 00:08:11.728 00:08:11.728 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.728 ------------------------------------------------------------------------------------ 00:08:11.728 0,0 73376/s 135 MiB/s 0 0 00:08:11.728 3,0 74304/s 136 MiB/s 0 0 00:08:11.728 2,0 73728/s 135 MiB/s 0 0 00:08:11.728 1,0 73728/s 135 MiB/s 0 0 00:08:11.728 ==================================================================================== 00:08:11.728 Total 295136/s 1152 MiB/s 0 0' 00:08:11.728 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.728 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.728 17:11:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.728 17:11:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.728 17:11:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.728 17:11:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.728 17:11:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.728 17:11:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.728 17:11:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.728 17:11:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.728 17:11:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.728 17:11:08 -- accel/accel.sh@42 -- # jq -r . 00:08:11.728 [2024-12-14 17:11:08.346648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
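Relative to that single-core pass, the mcore run above only changes the core mask to 0xf: four reactors start on cores 0-3 and each sustains roughly 73-74 K transfers/s, for a 295136/s aggregate, compared with 86272/s when the same workload ran on one core. A sketch of the equivalent manual invocation, under the same assumptions as the earlier one:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # same 4 KiB decompress workload, spread across four reactors via the -m core mask
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK_DIR"/test/accel/bib -y -m 0xf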
00:08:11.728 [2024-12-14 17:11:08.346732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205727 ] 00:08:11.728 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.987 [2024-12-14 17:11:08.418626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.987 [2024-12-14 17:11:08.456138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.987 [2024-12-14 17:11:08.456236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.987 [2024-12-14 17:11:08.456320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.987 [2024-12-14 17:11:08.456322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=0xf 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=decompress 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=software 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@23 -- # accel_module=software 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=32 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=32 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=1 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val=Yes 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:11.987 17:11:08 -- accel/accel.sh@21 -- # val= 00:08:11.987 17:11:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # IFS=: 00:08:11.987 17:11:08 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 
-- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@21 -- # val= 00:08:13.362 17:11:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # IFS=: 00:08:13.362 17:11:09 -- accel/accel.sh@20 -- # read -r var val 00:08:13.362 17:11:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.362 17:11:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:13.362 17:11:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.362 00:08:13.362 real 0m2.635s 00:08:13.362 user 0m9.024s 00:08:13.362 sys 0m0.277s 00:08:13.362 17:11:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.362 17:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:13.362 ************************************ 00:08:13.362 END TEST accel_decomp_mcore 00:08:13.362 ************************************ 00:08:13.362 17:11:09 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.362 17:11:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:13.362 17:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.362 17:11:09 -- common/autotest_common.sh@10 -- # set +x 00:08:13.362 ************************************ 00:08:13.362 START TEST accel_decomp_full_mcore 00:08:13.362 ************************************ 00:08:13.362 17:11:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.362 17:11:09 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.362 17:11:09 -- accel/accel.sh@17 -- # local accel_module 00:08:13.362 17:11:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.362 17:11:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.362 17:11:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.362 17:11:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.362 17:11:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.362 17:11:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.362 17:11:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.362 17:11:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.362 17:11:09 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.362 17:11:09 -- accel/accel.sh@42 -- # jq -r . 00:08:13.362 [2024-12-14 17:11:09.716590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:13.363 [2024-12-14 17:11:09.716662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205932 ] 00:08:13.363 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.363 [2024-12-14 17:11:09.788101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.363 [2024-12-14 17:11:09.826008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.363 [2024-12-14 17:11:09.826105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.363 [2024-12-14 17:11:09.826192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.363 [2024-12-14 17:11:09.826194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.739 17:11:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:14.739 00:08:14.739 SPDK Configuration: 00:08:14.739 Core mask: 0xf 00:08:14.739 00:08:14.739 Accel Perf Configuration: 00:08:14.739 Workload Type: decompress 00:08:14.739 Transfer size: 111250 bytes 00:08:14.739 Vector count 1 00:08:14.739 Module: software 00:08:14.739 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:14.739 Queue depth: 32 00:08:14.739 Allocate depth: 32 00:08:14.739 # threads/core: 1 00:08:14.739 Run time: 1 seconds 00:08:14.739 Verify: Yes 00:08:14.739 00:08:14.739 Running for 1 seconds... 00:08:14.739 00:08:14.739 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:14.739 ------------------------------------------------------------------------------------ 00:08:14.739 0,0 5664/s 233 MiB/s 0 0 00:08:14.739 3,0 5696/s 235 MiB/s 0 0 00:08:14.739 2,0 5696/s 235 MiB/s 0 0 00:08:14.739 1,0 5696/s 235 MiB/s 0 0 00:08:14.739 ==================================================================================== 00:08:14.739 Total 22752/s 2413 MiB/s 0 0' 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:14.739 17:11:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:14.739 17:11:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.739 17:11:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.739 17:11:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.739 17:11:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.739 17:11:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.739 17:11:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.739 17:11:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.739 17:11:11 -- accel/accel.sh@42 -- # jq -r . 00:08:14.739 [2024-12-14 17:11:11.039897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
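The full_mcore pass above combines -o 0 with the 0xf mask: its report shows a 111250-byte transfer size and about 5664-5696 transfers/s per reactor, 22752/s and roughly 2413 MiB/s in total. Equivalent manual invocation, same assumptions as the earlier sketches (the effect of -o 0 is inferred only from the transfer size the report prints):

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # large-buffer decompress on cores 0-3; with -o 0 the report above shows 111250-byte transfers
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK_DIR"/test/accel/bib -y -o 0 -m 0xf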
00:08:14.739 [2024-12-14 17:11:11.039969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206188 ] 00:08:14.739 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.739 [2024-12-14 17:11:11.109459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.739 [2024-12-14 17:11:11.146397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.739 [2024-12-14 17:11:11.146493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.739 [2024-12-14 17:11:11.146591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.739 [2024-12-14 17:11:11.146593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=0xf 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=decompress 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=software 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@23 -- # accel_module=software 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=32 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=32 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=1 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val=Yes 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:14.739 17:11:11 -- accel/accel.sh@21 -- # val= 00:08:14.739 17:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # IFS=: 00:08:14.739 17:11:11 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.675 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.675 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.675 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.675 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.675 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.675 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.675 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.675 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.675 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.676 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.676 17:11:12 
-- accel/accel.sh@20 -- # IFS=: 00:08:15.676 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.676 17:11:12 -- accel/accel.sh@21 -- # val= 00:08:15.676 17:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.676 17:11:12 -- accel/accel.sh@20 -- # IFS=: 00:08:15.676 17:11:12 -- accel/accel.sh@20 -- # read -r var val 00:08:15.676 17:11:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:15.676 17:11:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:15.676 17:11:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.676 00:08:15.676 real 0m2.648s 00:08:15.676 user 0m9.078s 00:08:15.676 sys 0m0.282s 00:08:15.676 17:11:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.676 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.676 ************************************ 00:08:15.676 END TEST accel_decomp_full_mcore 00:08:15.676 ************************************ 00:08:15.935 17:11:12 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.935 17:11:12 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:15.935 17:11:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.935 17:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.935 ************************************ 00:08:15.935 START TEST accel_decomp_mthread 00:08:15.935 ************************************ 00:08:15.935 17:11:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.935 17:11:12 -- accel/accel.sh@16 -- # local accel_opc 00:08:15.935 17:11:12 -- accel/accel.sh@17 -- # local accel_module 00:08:15.935 17:11:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.935 17:11:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.935 17:11:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.935 17:11:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.935 17:11:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.935 17:11:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.935 17:11:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.935 17:11:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.935 17:11:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.935 17:11:12 -- accel/accel.sh@42 -- # jq -r . 00:08:15.935 [2024-12-14 17:11:12.406964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:15.935 [2024-12-14 17:11:12.407038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206481 ] 00:08:15.935 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.935 [2024-12-14 17:11:12.476417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.935 [2024-12-14 17:11:12.511746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.310 17:11:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:17.310 00:08:17.310 SPDK Configuration: 00:08:17.310 Core mask: 0x1 00:08:17.310 00:08:17.310 Accel Perf Configuration: 00:08:17.310 Workload Type: decompress 00:08:17.310 Transfer size: 4096 bytes 00:08:17.310 Vector count 1 00:08:17.310 Module: software 00:08:17.310 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:17.310 Queue depth: 32 00:08:17.310 Allocate depth: 32 00:08:17.310 # threads/core: 2 00:08:17.310 Run time: 1 seconds 00:08:17.310 Verify: Yes 00:08:17.310 00:08:17.310 Running for 1 seconds... 00:08:17.310 00:08:17.310 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:17.310 ------------------------------------------------------------------------------------ 00:08:17.310 0,1 43520/s 80 MiB/s 0 0 00:08:17.310 0,0 43424/s 80 MiB/s 0 0 00:08:17.310 ==================================================================================== 00:08:17.310 Total 86944/s 339 MiB/s 0 0' 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:17.310 17:11:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:17.310 17:11:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.310 17:11:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:17.310 17:11:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.310 17:11:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.310 17:11:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:17.310 17:11:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:17.310 17:11:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:17.310 17:11:13 -- accel/accel.sh@42 -- # jq -r . 00:08:17.310 [2024-12-14 17:11:13.709969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
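The mthread pass above keeps the 0x1 mask but adds -T 2, so its report lists two worker threads on core 0 (rows 0,0 and 0,1) at about 43.4-43.5 K transfers/s each; the 86944/s total is essentially the 86272/s a single thread delivered on that core earlier. Manual equivalent, same assumptions as above:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # two worker threads on one core; the report prints one result row per thread (0,0 and 0,1)
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK_DIR"/test/accel/bib -y -T 2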
00:08:17.310 [2024-12-14 17:11:13.710042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206751 ] 00:08:17.310 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.310 [2024-12-14 17:11:13.779896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.310 [2024-12-14 17:11:13.813793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val=0x1 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val=decompress 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val=software 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@23 -- # accel_module=software 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.310 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.310 17:11:13 -- accel/accel.sh@21 -- # val=32 00:08:17.310 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.311 17:11:13 -- 
accel/accel.sh@20 -- # read -r var val 00:08:17.311 17:11:13 -- accel/accel.sh@21 -- # val=32 00:08:17.311 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.311 17:11:13 -- accel/accel.sh@21 -- # val=2 00:08:17.311 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.311 17:11:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:17.311 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.311 17:11:13 -- accel/accel.sh@21 -- # val=Yes 00:08:17.311 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.311 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.311 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:17.311 17:11:13 -- accel/accel.sh@21 -- # val= 00:08:17.311 17:11:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # IFS=: 00:08:17.311 17:11:13 -- accel/accel.sh@20 -- # read -r var val 00:08:18.686 17:11:14 -- accel/accel.sh@21 -- # val= 00:08:18.686 17:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # IFS=: 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # read -r var val 00:08:18.686 17:11:14 -- accel/accel.sh@21 -- # val= 00:08:18.686 17:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # IFS=: 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # read -r var val 00:08:18.686 17:11:14 -- accel/accel.sh@21 -- # val= 00:08:18.686 17:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # IFS=: 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # read -r var val 00:08:18.686 17:11:14 -- accel/accel.sh@21 -- # val= 00:08:18.686 17:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # IFS=: 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # read -r var val 00:08:18.686 17:11:14 -- accel/accel.sh@21 -- # val= 00:08:18.686 17:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # IFS=: 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # read -r var val 00:08:18.686 17:11:14 -- accel/accel.sh@21 -- # val= 00:08:18.686 17:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # IFS=: 00:08:18.686 17:11:14 -- accel/accel.sh@20 -- # read -r var val 00:08:18.686 17:11:14 -- accel/accel.sh@21 -- # val= 00:08:18.686 17:11:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.687 17:11:14 -- accel/accel.sh@20 -- # IFS=: 00:08:18.687 17:11:14 -- accel/accel.sh@20 -- # read -r var val 00:08:18.687 17:11:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:18.687 17:11:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:18.687 17:11:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.687 00:08:18.687 real 0m2.609s 00:08:18.687 user 0m2.358s 00:08:18.687 sys 0m0.259s 00:08:18.687 17:11:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.687 17:11:14 -- common/autotest_common.sh@10 -- # set +x 
00:08:18.687 ************************************ 00:08:18.687 END TEST accel_decomp_mthread 00:08:18.687 ************************************ 00:08:18.687 17:11:15 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.687 17:11:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:18.687 17:11:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.687 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:08:18.687 ************************************ 00:08:18.687 START TEST accel_deomp_full_mthread 00:08:18.687 ************************************ 00:08:18.687 17:11:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.687 17:11:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:18.687 17:11:15 -- accel/accel.sh@17 -- # local accel_module 00:08:18.687 17:11:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.687 17:11:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.687 17:11:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:18.687 17:11:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.687 17:11:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.687 17:11:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.687 17:11:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.687 17:11:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.687 17:11:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.687 17:11:15 -- accel/accel.sh@42 -- # jq -r . 00:08:18.687 [2024-12-14 17:11:15.066621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:18.687 [2024-12-14 17:11:15.066699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207032 ] 00:08:18.687 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.687 [2024-12-14 17:11:15.135822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.687 [2024-12-14 17:11:15.169445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.059 17:11:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:20.059 00:08:20.059 SPDK Configuration: 00:08:20.059 Core mask: 0x1 00:08:20.059 00:08:20.059 Accel Perf Configuration: 00:08:20.059 Workload Type: decompress 00:08:20.059 Transfer size: 111250 bytes 00:08:20.059 Vector count 1 00:08:20.059 Module: software 00:08:20.059 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:20.059 Queue depth: 32 00:08:20.059 Allocate depth: 32 00:08:20.059 # threads/core: 2 00:08:20.059 Run time: 1 seconds 00:08:20.059 Verify: Yes 00:08:20.059 00:08:20.059 Running for 1 seconds... 
00:08:20.059 00:08:20.060 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:20.060 ------------------------------------------------------------------------------------ 00:08:20.060 0,1 2880/s 118 MiB/s 0 0 00:08:20.060 0,0 2880/s 118 MiB/s 0 0 00:08:20.060 ==================================================================================== 00:08:20.060 Total 5760/s 611 MiB/s 0 0' 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:20.060 17:11:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:20.060 17:11:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:20.060 17:11:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:20.060 17:11:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.060 17:11:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.060 17:11:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:20.060 17:11:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:20.060 17:11:16 -- accel/accel.sh@41 -- # local IFS=, 00:08:20.060 17:11:16 -- accel/accel.sh@42 -- # jq -r . 00:08:20.060 [2024-12-14 17:11:16.385472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:20.060 [2024-12-14 17:11:16.385566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207273 ] 00:08:20.060 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.060 [2024-12-14 17:11:16.456392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.060 [2024-12-14 17:11:16.490646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=0x1 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=decompress 00:08:20.060 17:11:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=software 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@23 -- # accel_module=software 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=32 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=32 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=2 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val=Yes 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:20.060 17:11:16 -- accel/accel.sh@21 -- # val= 00:08:20.060 17:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # IFS=: 00:08:20.060 17:11:16 -- accel/accel.sh@20 -- # read -r var val 00:08:21.435 17:11:17 -- accel/accel.sh@21 -- # val= 00:08:21.435 17:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # IFS=: 00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # read -r var val 00:08:21.435 17:11:17 -- accel/accel.sh@21 -- # val= 00:08:21.435 17:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # IFS=: 00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # read -r var val 00:08:21.435 17:11:17 -- accel/accel.sh@21 -- # val= 00:08:21.435 17:11:17 -- accel/accel.sh@22 -- # case "$var" in 
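The trace around this point shows accel.sh configuring the software decompress case with two worker threads: the workload value is set to decompress, the module to software, the chunk to '111250 bytes', the thread count to 2, and the run length to '1 seconds'. A minimal way to reproduce the same run by hand, using only the command, flags, and paths already recorded in this log, might look like the sketch below; SPDK_DIR is a convenience variable introduced here for readability and is not something the test script defines.

# Sketch only: re-issuing the accel_perf software-decompress run recorded in the
# trace above, with the flags and paths taken verbatim from this log.
# -w decompress and -t 1 match the workload and '1 seconds' values in the trace;
# -T 2 matches the two worker threads (rows 0,0 and 0,1) in the results table;
# -l points at the compressed test input. The script also passes -c /dev/fd/62 to
# feed its JSON accel config, omitted here since the traced config was empty.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2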
00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # IFS=: 00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # read -r var val 00:08:21.435 17:11:17 -- accel/accel.sh@21 -- # val= 00:08:21.435 17:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # IFS=: 00:08:21.435 17:11:17 -- accel/accel.sh@20 -- # read -r var val 00:08:21.436 17:11:17 -- accel/accel.sh@21 -- # val= 00:08:21.436 17:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.436 17:11:17 -- accel/accel.sh@20 -- # IFS=: 00:08:21.436 17:11:17 -- accel/accel.sh@20 -- # read -r var val 00:08:21.436 17:11:17 -- accel/accel.sh@21 -- # val= 00:08:21.436 17:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.436 17:11:17 -- accel/accel.sh@20 -- # IFS=: 00:08:21.436 17:11:17 -- accel/accel.sh@20 -- # read -r var val 00:08:21.436 17:11:17 -- accel/accel.sh@21 -- # val= 00:08:21.436 17:11:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.436 17:11:17 -- accel/accel.sh@20 -- # IFS=: 00:08:21.436 17:11:17 -- accel/accel.sh@20 -- # read -r var val 00:08:21.436 17:11:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:21.436 17:11:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:21.436 17:11:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.436 00:08:21.436 real 0m2.651s 00:08:21.436 user 0m2.395s 00:08:21.436 sys 0m0.264s 00:08:21.436 17:11:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.436 17:11:17 -- common/autotest_common.sh@10 -- # set +x 00:08:21.436 ************************************ 00:08:21.436 END TEST accel_deomp_full_mthread 00:08:21.436 ************************************ 00:08:21.436 17:11:17 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:21.436 17:11:17 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:21.436 17:11:17 -- accel/accel.sh@129 -- # build_accel_config 00:08:21.436 17:11:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:21.436 17:11:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.436 17:11:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:21.436 17:11:17 -- common/autotest_common.sh@10 -- # set +x 00:08:21.436 17:11:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.436 17:11:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.436 17:11:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:21.436 17:11:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:21.436 17:11:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:21.436 17:11:17 -- accel/accel.sh@42 -- # jq -r . 00:08:21.436 ************************************ 00:08:21.436 START TEST accel_dif_functional_tests 00:08:21.436 ************************************ 00:08:21.436 17:11:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:21.436 [2024-12-14 17:11:17.783286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:21.436 [2024-12-14 17:11:17.783344] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207474 ] 00:08:21.436 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.436 [2024-12-14 17:11:17.852441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.436 [2024-12-14 17:11:17.890080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.436 [2024-12-14 17:11:17.890176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.436 [2024-12-14 17:11:17.890178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.436 00:08:21.436 00:08:21.436 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.436 http://cunit.sourceforge.net/ 00:08:21.436 00:08:21.436 00:08:21.436 Suite: accel_dif 00:08:21.436 Test: verify: DIF generated, GUARD check ...passed 00:08:21.436 Test: verify: DIF generated, APPTAG check ...passed 00:08:21.436 Test: verify: DIF generated, REFTAG check ...passed 00:08:21.436 Test: verify: DIF not generated, GUARD check ...[2024-12-14 17:11:17.953797] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:21.436 [2024-12-14 17:11:17.953846] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:21.436 passed 00:08:21.436 Test: verify: DIF not generated, APPTAG check ...[2024-12-14 17:11:17.953877] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:21.436 [2024-12-14 17:11:17.953894] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:21.436 passed 00:08:21.436 Test: verify: DIF not generated, REFTAG check ...[2024-12-14 17:11:17.953914] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:21.436 [2024-12-14 17:11:17.953930] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:21.436 passed 00:08:21.436 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:21.436 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-14 17:11:17.953973] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:21.436 passed 00:08:21.436 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:21.436 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:21.436 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:21.436 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-14 17:11:17.954079] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:21.436 passed 00:08:21.436 Test: generate copy: DIF generated, GUARD check ...passed 00:08:21.436 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:21.436 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:21.436 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:21.436 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:21.436 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:21.436 Test: generate copy: iovecs-len validate ...[2024-12-14 17:11:17.954254] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:21.436 passed 00:08:21.436 Test: generate copy: buffer alignment validate ...passed 00:08:21.436 00:08:21.436 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.436 suites 1 1 n/a 0 0 00:08:21.436 tests 20 20 20 0 0 00:08:21.436 asserts 204 204 204 0 n/a 00:08:21.436 00:08:21.436 Elapsed time = 0.000 seconds 00:08:21.436 00:08:21.436 real 0m0.370s 00:08:21.436 user 0m0.545s 00:08:21.436 sys 0m0.165s 00:08:21.436 17:11:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.436 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:21.436 ************************************ 00:08:21.436 END TEST accel_dif_functional_tests 00:08:21.436 ************************************ 00:08:21.695 00:08:21.695 real 0m55.603s 00:08:21.695 user 1m3.143s 00:08:21.695 sys 0m6.977s 00:08:21.695 17:11:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.695 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:21.695 ************************************ 00:08:21.695 END TEST accel 00:08:21.695 ************************************ 00:08:21.695 17:11:18 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:21.695 17:11:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.695 17:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.695 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:21.695 ************************************ 00:08:21.695 START TEST accel_rpc 00:08:21.695 ************************************ 00:08:21.695 17:11:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:21.695 * Looking for test storage... 00:08:21.695 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:21.695 17:11:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:21.695 17:11:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:21.695 17:11:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:21.695 17:11:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:21.695 17:11:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:21.695 17:11:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:21.695 17:11:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:21.695 17:11:18 -- scripts/common.sh@335 -- # IFS=.-: 00:08:21.695 17:11:18 -- scripts/common.sh@335 -- # read -ra ver1 00:08:21.695 17:11:18 -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.695 17:11:18 -- scripts/common.sh@336 -- # read -ra ver2 00:08:21.695 17:11:18 -- scripts/common.sh@337 -- # local 'op=<' 00:08:21.695 17:11:18 -- scripts/common.sh@339 -- # ver1_l=2 00:08:21.695 17:11:18 -- scripts/common.sh@340 -- # ver2_l=1 00:08:21.695 17:11:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:21.695 17:11:18 -- scripts/common.sh@343 -- # case "$op" in 00:08:21.695 17:11:18 -- scripts/common.sh@344 -- # : 1 00:08:21.695 17:11:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:21.695 17:11:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.695 17:11:18 -- scripts/common.sh@364 -- # decimal 1 00:08:21.695 17:11:18 -- scripts/common.sh@352 -- # local d=1 00:08:21.695 17:11:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.954 17:11:18 -- scripts/common.sh@354 -- # echo 1 00:08:21.954 17:11:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:21.954 17:11:18 -- scripts/common.sh@365 -- # decimal 2 00:08:21.954 17:11:18 -- scripts/common.sh@352 -- # local d=2 00:08:21.954 17:11:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.954 17:11:18 -- scripts/common.sh@354 -- # echo 2 00:08:21.954 17:11:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:21.954 17:11:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:21.954 17:11:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:21.954 17:11:18 -- scripts/common.sh@367 -- # return 0 00:08:21.954 17:11:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.954 17:11:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:21.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.954 --rc genhtml_branch_coverage=1 00:08:21.954 --rc genhtml_function_coverage=1 00:08:21.954 --rc genhtml_legend=1 00:08:21.955 --rc geninfo_all_blocks=1 00:08:21.955 --rc geninfo_unexecuted_blocks=1 00:08:21.955 00:08:21.955 ' 00:08:21.955 17:11:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.955 --rc genhtml_branch_coverage=1 00:08:21.955 --rc genhtml_function_coverage=1 00:08:21.955 --rc genhtml_legend=1 00:08:21.955 --rc geninfo_all_blocks=1 00:08:21.955 --rc geninfo_unexecuted_blocks=1 00:08:21.955 00:08:21.955 ' 00:08:21.955 17:11:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.955 --rc genhtml_branch_coverage=1 00:08:21.955 --rc genhtml_function_coverage=1 00:08:21.955 --rc genhtml_legend=1 00:08:21.955 --rc geninfo_all_blocks=1 00:08:21.955 --rc geninfo_unexecuted_blocks=1 00:08:21.955 00:08:21.955 ' 00:08:21.955 17:11:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.955 --rc genhtml_branch_coverage=1 00:08:21.955 --rc genhtml_function_coverage=1 00:08:21.955 --rc genhtml_legend=1 00:08:21.955 --rc geninfo_all_blocks=1 00:08:21.955 --rc geninfo_unexecuted_blocks=1 00:08:21.955 00:08:21.955 ' 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1207668 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@15 -- # waitforlisten 1207668 00:08:21.955 17:11:18 -- common/autotest_common.sh@829 -- # '[' -z 1207668 ']' 00:08:21.955 17:11:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.955 17:11:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.955 17:11:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.955 17:11:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.955 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:21.955 [2024-12-14 17:11:18.436952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:21.955 [2024-12-14 17:11:18.437011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207668 ] 00:08:21.955 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.955 [2024-12-14 17:11:18.506779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.955 [2024-12-14 17:11:18.544725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.955 [2024-12-14 17:11:18.544854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.955 17:11:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.955 17:11:18 -- common/autotest_common.sh@862 -- # return 0 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:21.955 17:11:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.955 17:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.955 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 ************************************ 00:08:21.955 START TEST accel_assign_opcode 00:08:21.955 ************************************ 00:08:21.955 17:11:18 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:21.955 17:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.955 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 [2024-12-14 17:11:18.581259] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:21.955 17:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:21.955 17:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.955 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:21.955 [2024-12-14 17:11:18.589271] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:21.955 17:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.955 17:11:18 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:21.955 17:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.955 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:22.214 17:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.214 17:11:18 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:22.214 17:11:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.214 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:22.214 17:11:18 -- accel/accel_rpc.sh@42 -- # jq -r .copy 
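The accel_assign_opcode test traced above starts spdk_tgt with --wait-for-rpc, assigns the copy opcode first to a non-existent module ('incorrect') and then to software, calls framework_start_init, and finally confirms the assignment by piping accel_get_opc_assignments through jq and grep. Done by hand with the same rpc.py and binaries this log already references, the sequence would look roughly like the sketch below; the sleep is only an illustrative stand-in for the script's waitforlisten helper.

# Sketch of the same opcode-assignment flow outside the test harness.
# All RPC names, flags, and paths are the ones recorded in this log.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &    # target waits before framework init

sleep 2   # crude stand-in for waitforlisten: give the RPC socket time to appear

# Opcode assignment must happen before framework_start_init, exactly as in the trace.
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software
"$SPDK_DIR/scripts/rpc.py" framework_start_init

# Verify the assignment the same way accel_rpc.sh does.
"$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy | grep software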
00:08:22.214 17:11:18 -- accel/accel_rpc.sh@42 -- # grep software 00:08:22.214 17:11:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.214 software 00:08:22.214 00:08:22.214 real 0m0.214s 00:08:22.214 user 0m0.031s 00:08:22.214 sys 0m0.013s 00:08:22.214 17:11:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.214 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:22.214 ************************************ 00:08:22.214 END TEST accel_assign_opcode 00:08:22.214 ************************************ 00:08:22.214 17:11:18 -- accel/accel_rpc.sh@55 -- # killprocess 1207668 00:08:22.214 17:11:18 -- common/autotest_common.sh@936 -- # '[' -z 1207668 ']' 00:08:22.214 17:11:18 -- common/autotest_common.sh@940 -- # kill -0 1207668 00:08:22.214 17:11:18 -- common/autotest_common.sh@941 -- # uname 00:08:22.214 17:11:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:22.214 17:11:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1207668 00:08:22.473 17:11:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:22.473 17:11:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:22.473 17:11:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1207668' 00:08:22.473 killing process with pid 1207668 00:08:22.473 17:11:18 -- common/autotest_common.sh@955 -- # kill 1207668 00:08:22.473 17:11:18 -- common/autotest_common.sh@960 -- # wait 1207668 00:08:22.731 00:08:22.731 real 0m0.988s 00:08:22.731 user 0m0.843s 00:08:22.731 sys 0m0.486s 00:08:22.731 17:11:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.731 17:11:19 -- common/autotest_common.sh@10 -- # set +x 00:08:22.731 ************************************ 00:08:22.731 END TEST accel_rpc 00:08:22.731 ************************************ 00:08:22.732 17:11:19 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:22.732 17:11:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:22.732 17:11:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.732 17:11:19 -- common/autotest_common.sh@10 -- # set +x 00:08:22.732 ************************************ 00:08:22.732 START TEST app_cmdline 00:08:22.732 ************************************ 00:08:22.732 17:11:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:22.732 * Looking for test storage... 
00:08:22.732 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:22.732 17:11:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.732 17:11:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.732 17:11:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.732 17:11:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.732 17:11:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.991 17:11:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.991 17:11:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.991 17:11:19 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.991 17:11:19 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.991 17:11:19 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.991 17:11:19 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.991 17:11:19 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.991 17:11:19 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.991 17:11:19 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.991 17:11:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.991 17:11:19 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.991 17:11:19 -- scripts/common.sh@344 -- # : 1 00:08:22.991 17:11:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.991 17:11:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.991 17:11:19 -- scripts/common.sh@364 -- # decimal 1 00:08:22.991 17:11:19 -- scripts/common.sh@352 -- # local d=1 00:08:22.991 17:11:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.991 17:11:19 -- scripts/common.sh@354 -- # echo 1 00:08:22.991 17:11:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.991 17:11:19 -- scripts/common.sh@365 -- # decimal 2 00:08:22.991 17:11:19 -- scripts/common.sh@352 -- # local d=2 00:08:22.991 17:11:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.991 17:11:19 -- scripts/common.sh@354 -- # echo 2 00:08:22.991 17:11:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.991 17:11:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.991 17:11:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.991 17:11:19 -- scripts/common.sh@367 -- # return 0 00:08:22.991 17:11:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.991 17:11:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.991 --rc genhtml_branch_coverage=1 00:08:22.991 --rc genhtml_function_coverage=1 00:08:22.991 --rc genhtml_legend=1 00:08:22.991 --rc geninfo_all_blocks=1 00:08:22.991 --rc geninfo_unexecuted_blocks=1 00:08:22.991 00:08:22.991 ' 00:08:22.991 17:11:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.991 --rc genhtml_branch_coverage=1 00:08:22.991 --rc genhtml_function_coverage=1 00:08:22.991 --rc genhtml_legend=1 00:08:22.991 --rc geninfo_all_blocks=1 00:08:22.991 --rc geninfo_unexecuted_blocks=1 00:08:22.991 00:08:22.991 ' 00:08:22.991 17:11:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.991 --rc genhtml_branch_coverage=1 00:08:22.991 --rc genhtml_function_coverage=1 00:08:22.991 --rc genhtml_legend=1 00:08:22.991 --rc geninfo_all_blocks=1 00:08:22.991 --rc geninfo_unexecuted_blocks=1 00:08:22.991 00:08:22.991 ' 
00:08:22.991 17:11:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.991 --rc genhtml_branch_coverage=1 00:08:22.991 --rc genhtml_function_coverage=1 00:08:22.991 --rc genhtml_legend=1 00:08:22.991 --rc geninfo_all_blocks=1 00:08:22.991 --rc geninfo_unexecuted_blocks=1 00:08:22.991 00:08:22.991 ' 00:08:22.991 17:11:19 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:22.991 17:11:19 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1207962 00:08:22.991 17:11:19 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:22.991 17:11:19 -- app/cmdline.sh@18 -- # waitforlisten 1207962 00:08:22.991 17:11:19 -- common/autotest_common.sh@829 -- # '[' -z 1207962 ']' 00:08:22.991 17:11:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.991 17:11:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.992 17:11:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.992 17:11:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.992 17:11:19 -- common/autotest_common.sh@10 -- # set +x 00:08:22.992 [2024-12-14 17:11:19.479983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:22.992 [2024-12-14 17:11:19.480034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207962 ] 00:08:22.992 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.992 [2024-12-14 17:11:19.548504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.992 [2024-12-14 17:11:19.584018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.992 [2024-12-14 17:11:19.584155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.927 17:11:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.927 17:11:20 -- common/autotest_common.sh@862 -- # return 0 00:08:23.927 17:11:20 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:23.927 { 00:08:23.927 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:23.927 "fields": { 00:08:23.927 "major": 24, 00:08:23.927 "minor": 1, 00:08:23.927 "patch": 1, 00:08:23.927 "suffix": "-pre", 00:08:23.927 "commit": "c13c99a5e" 00:08:23.927 } 00:08:23.927 } 00:08:23.927 17:11:20 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:23.927 17:11:20 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:23.927 17:11:20 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:23.927 17:11:20 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:23.927 17:11:20 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:23.927 17:11:20 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:23.927 17:11:20 -- app/cmdline.sh@26 -- # sort 00:08:23.927 17:11:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.927 17:11:20 -- common/autotest_common.sh@10 -- # set +x 00:08:23.927 17:11:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.927 17:11:20 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:23.927 17:11:20 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:23.927 17:11:20 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:23.927 17:11:20 -- common/autotest_common.sh@650 -- # local es=0 00:08:23.927 17:11:20 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:23.927 17:11:20 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.927 17:11:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.927 17:11:20 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.927 17:11:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.927 17:11:20 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.927 17:11:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.927 17:11:20 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.927 17:11:20 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:23.927 17:11:20 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:24.186 request: 00:08:24.186 { 00:08:24.186 "method": "env_dpdk_get_mem_stats", 00:08:24.186 "req_id": 1 00:08:24.186 } 00:08:24.186 Got JSON-RPC error response 00:08:24.186 response: 00:08:24.186 { 00:08:24.186 "code": -32601, 00:08:24.186 "message": "Method not found" 00:08:24.186 } 00:08:24.186 17:11:20 -- common/autotest_common.sh@653 -- # es=1 00:08:24.186 17:11:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.186 17:11:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.186 17:11:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.186 17:11:20 -- app/cmdline.sh@1 -- # killprocess 1207962 00:08:24.186 17:11:20 -- common/autotest_common.sh@936 -- # '[' -z 1207962 ']' 00:08:24.186 17:11:20 -- common/autotest_common.sh@940 -- # kill -0 1207962 00:08:24.186 17:11:20 -- common/autotest_common.sh@941 -- # uname 00:08:24.186 17:11:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:24.186 17:11:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1207962 00:08:24.186 17:11:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:24.186 17:11:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:24.186 17:11:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1207962' 00:08:24.186 killing process with pid 1207962 00:08:24.186 17:11:20 -- common/autotest_common.sh@955 -- # kill 1207962 00:08:24.186 17:11:20 -- common/autotest_common.sh@960 -- # wait 1207962 00:08:24.446 00:08:24.446 real 0m1.807s 00:08:24.446 user 0m2.085s 00:08:24.446 sys 0m0.526s 00:08:24.446 17:11:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.446 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:24.446 ************************************ 00:08:24.446 END TEST app_cmdline 00:08:24.446 ************************************ 00:08:24.446 17:11:21 -- spdk/autotest.sh@179 -- # run_test version 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:24.446 17:11:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.446 17:11:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.446 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:24.446 ************************************ 00:08:24.446 START TEST version 00:08:24.446 ************************************ 00:08:24.446 17:11:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:24.705 * Looking for test storage... 00:08:24.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:24.705 17:11:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:24.705 17:11:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:24.705 17:11:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:24.705 17:11:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:24.705 17:11:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:24.705 17:11:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:24.705 17:11:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:24.705 17:11:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:24.705 17:11:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:24.705 17:11:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.705 17:11:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:24.705 17:11:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:24.705 17:11:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:24.705 17:11:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:24.705 17:11:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:24.705 17:11:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:24.705 17:11:21 -- scripts/common.sh@344 -- # : 1 00:08:24.705 17:11:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:24.705 17:11:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.705 17:11:21 -- scripts/common.sh@364 -- # decimal 1 00:08:24.705 17:11:21 -- scripts/common.sh@352 -- # local d=1 00:08:24.705 17:11:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.705 17:11:21 -- scripts/common.sh@354 -- # echo 1 00:08:24.705 17:11:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:24.705 17:11:21 -- scripts/common.sh@365 -- # decimal 2 00:08:24.705 17:11:21 -- scripts/common.sh@352 -- # local d=2 00:08:24.705 17:11:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.705 17:11:21 -- scripts/common.sh@354 -- # echo 2 00:08:24.705 17:11:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:24.705 17:11:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:24.705 17:11:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:24.705 17:11:21 -- scripts/common.sh@367 -- # return 0 00:08:24.705 17:11:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.705 17:11:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:24.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.706 --rc genhtml_branch_coverage=1 00:08:24.706 --rc genhtml_function_coverage=1 00:08:24.706 --rc genhtml_legend=1 00:08:24.706 --rc geninfo_all_blocks=1 00:08:24.706 --rc geninfo_unexecuted_blocks=1 00:08:24.706 00:08:24.706 ' 00:08:24.706 17:11:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:24.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.706 --rc genhtml_branch_coverage=1 00:08:24.706 --rc genhtml_function_coverage=1 00:08:24.706 --rc genhtml_legend=1 00:08:24.706 --rc geninfo_all_blocks=1 00:08:24.706 --rc geninfo_unexecuted_blocks=1 00:08:24.706 00:08:24.706 ' 00:08:24.706 17:11:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:24.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.706 --rc genhtml_branch_coverage=1 00:08:24.706 --rc genhtml_function_coverage=1 00:08:24.706 --rc genhtml_legend=1 00:08:24.706 --rc geninfo_all_blocks=1 00:08:24.706 --rc geninfo_unexecuted_blocks=1 00:08:24.706 00:08:24.706 ' 00:08:24.706 17:11:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:24.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.706 --rc genhtml_branch_coverage=1 00:08:24.706 --rc genhtml_function_coverage=1 00:08:24.706 --rc genhtml_legend=1 00:08:24.706 --rc geninfo_all_blocks=1 00:08:24.706 --rc geninfo_unexecuted_blocks=1 00:08:24.706 00:08:24.706 ' 00:08:24.706 17:11:21 -- app/version.sh@17 -- # get_header_version major 00:08:24.706 17:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:24.706 17:11:21 -- app/version.sh@14 -- # cut -f2 00:08:24.706 17:11:21 -- app/version.sh@14 -- # tr -d '"' 00:08:24.706 17:11:21 -- app/version.sh@17 -- # major=24 00:08:24.706 17:11:21 -- app/version.sh@18 -- # get_header_version minor 00:08:24.706 17:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:24.706 17:11:21 -- app/version.sh@14 -- # cut -f2 00:08:24.706 17:11:21 -- app/version.sh@14 -- # tr -d '"' 00:08:24.706 17:11:21 -- app/version.sh@18 -- # minor=1 00:08:24.706 17:11:21 -- app/version.sh@19 -- # get_header_version patch 00:08:24.706 17:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:24.706 17:11:21 -- app/version.sh@14 -- # cut -f2 00:08:24.706 17:11:21 -- app/version.sh@14 -- # tr -d '"' 00:08:24.706 17:11:21 -- app/version.sh@19 -- # patch=1 00:08:24.706 17:11:21 -- app/version.sh@20 -- # get_header_version suffix 00:08:24.706 17:11:21 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:24.706 17:11:21 -- app/version.sh@14 -- # cut -f2 00:08:24.706 17:11:21 -- app/version.sh@14 -- # tr -d '"' 00:08:24.706 17:11:21 -- app/version.sh@20 -- # suffix=-pre 00:08:24.706 17:11:21 -- app/version.sh@22 -- # version=24.1 00:08:24.706 17:11:21 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:24.706 17:11:21 -- app/version.sh@25 -- # version=24.1.1 00:08:24.706 17:11:21 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:24.706 17:11:21 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:24.706 17:11:21 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:24.706 17:11:21 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:24.706 17:11:21 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:24.706 00:08:24.706 real 0m0.259s 00:08:24.706 user 0m0.148s 00:08:24.706 sys 0m0.155s 00:08:24.706 17:11:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.706 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:24.706 ************************************ 00:08:24.706 END TEST version 00:08:24.706 ************************************ 00:08:24.966 17:11:21 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:24.966 17:11:21 -- spdk/autotest.sh@191 -- # uname -s 00:08:24.966 17:11:21 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:08:24.966 17:11:21 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:24.966 17:11:21 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:24.966 17:11:21 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:24.966 17:11:21 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:24.966 17:11:21 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:24.966 17:11:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.966 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:24.966 17:11:21 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:24.966 17:11:21 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:24.966 17:11:21 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:24.966 17:11:21 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:24.966 17:11:21 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:08:24.966 17:11:21 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:24.966 17:11:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:24.966 17:11:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.966 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:24.966 ************************************ 00:08:24.966 START TEST nvmf_rdma 00:08:24.966 ************************************ 00:08:24.966 17:11:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:24.966 * Looking 
for test storage... 00:08:24.966 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:24.966 17:11:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:24.966 17:11:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:24.966 17:11:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:24.966 17:11:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:24.966 17:11:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:24.966 17:11:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:24.966 17:11:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:24.966 17:11:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:24.966 17:11:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:24.966 17:11:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.966 17:11:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:24.966 17:11:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:24.966 17:11:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:24.966 17:11:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:24.966 17:11:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:24.966 17:11:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:24.966 17:11:21 -- scripts/common.sh@344 -- # : 1 00:08:24.966 17:11:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:24.966 17:11:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.966 17:11:21 -- scripts/common.sh@364 -- # decimal 1 00:08:24.966 17:11:21 -- scripts/common.sh@352 -- # local d=1 00:08:24.966 17:11:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.966 17:11:21 -- scripts/common.sh@354 -- # echo 1 00:08:24.966 17:11:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:24.966 17:11:21 -- scripts/common.sh@365 -- # decimal 2 00:08:24.966 17:11:21 -- scripts/common.sh@352 -- # local d=2 00:08:24.966 17:11:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.966 17:11:21 -- scripts/common.sh@354 -- # echo 2 00:08:24.966 17:11:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:24.966 17:11:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:24.966 17:11:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:24.966 17:11:21 -- scripts/common.sh@367 -- # return 0 00:08:24.966 17:11:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.966 17:11:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.966 --rc genhtml_branch_coverage=1 00:08:24.966 --rc genhtml_function_coverage=1 00:08:24.966 --rc genhtml_legend=1 00:08:24.966 --rc geninfo_all_blocks=1 00:08:24.966 --rc geninfo_unexecuted_blocks=1 00:08:24.966 00:08:24.966 ' 00:08:24.966 17:11:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.966 --rc genhtml_branch_coverage=1 00:08:24.966 --rc genhtml_function_coverage=1 00:08:24.966 --rc genhtml_legend=1 00:08:24.966 --rc geninfo_all_blocks=1 00:08:24.966 --rc geninfo_unexecuted_blocks=1 00:08:24.966 00:08:24.966 ' 00:08:24.966 17:11:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.966 --rc genhtml_branch_coverage=1 00:08:24.966 --rc genhtml_function_coverage=1 00:08:24.966 --rc genhtml_legend=1 00:08:24.966 --rc geninfo_all_blocks=1 00:08:24.966 --rc geninfo_unexecuted_blocks=1 00:08:24.966 
00:08:24.966 ' 00:08:24.966 17:11:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.966 --rc genhtml_branch_coverage=1 00:08:24.966 --rc genhtml_function_coverage=1 00:08:24.966 --rc genhtml_legend=1 00:08:24.966 --rc geninfo_all_blocks=1 00:08:24.966 --rc geninfo_unexecuted_blocks=1 00:08:24.966 00:08:24.966 ' 00:08:24.966 17:11:21 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:24.966 17:11:21 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:24.966 17:11:21 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.966 17:11:21 -- nvmf/common.sh@7 -- # uname -s 00:08:24.966 17:11:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.966 17:11:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.966 17:11:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.966 17:11:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.966 17:11:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.966 17:11:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.966 17:11:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.966 17:11:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.966 17:11:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.966 17:11:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.966 17:11:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:24.966 17:11:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:24.966 17:11:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.226 17:11:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.226 17:11:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.226 17:11:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:25.226 17:11:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.226 17:11:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.226 17:11:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.226 17:11:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.226 17:11:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.226 17:11:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.226 17:11:21 -- paths/export.sh@5 -- # export PATH 00:08:25.227 17:11:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.227 17:11:21 -- nvmf/common.sh@46 -- # : 0 00:08:25.227 17:11:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:25.227 17:11:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:25.227 17:11:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:25.227 17:11:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.227 17:11:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.227 17:11:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:25.227 17:11:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:25.227 17:11:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:25.227 17:11:21 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:25.227 17:11:21 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:25.227 17:11:21 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:25.227 17:11:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.227 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:25.227 17:11:21 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:25.227 17:11:21 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:25.227 17:11:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:25.227 17:11:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.227 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:25.227 ************************************ 00:08:25.227 START TEST nvmf_example 00:08:25.227 ************************************ 00:08:25.227 17:11:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:25.227 * Looking for test storage... 
00:08:25.227 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:25.227 17:11:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:25.227 17:11:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:25.227 17:11:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:25.227 17:11:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:25.227 17:11:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:25.227 17:11:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:25.227 17:11:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:25.227 17:11:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:25.227 17:11:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:25.227 17:11:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.227 17:11:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:25.227 17:11:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:25.227 17:11:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:25.227 17:11:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:25.227 17:11:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:25.227 17:11:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:25.227 17:11:21 -- scripts/common.sh@344 -- # : 1 00:08:25.227 17:11:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:25.227 17:11:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.227 17:11:21 -- scripts/common.sh@364 -- # decimal 1 00:08:25.227 17:11:21 -- scripts/common.sh@352 -- # local d=1 00:08:25.227 17:11:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.227 17:11:21 -- scripts/common.sh@354 -- # echo 1 00:08:25.227 17:11:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:25.227 17:11:21 -- scripts/common.sh@365 -- # decimal 2 00:08:25.227 17:11:21 -- scripts/common.sh@352 -- # local d=2 00:08:25.227 17:11:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.227 17:11:21 -- scripts/common.sh@354 -- # echo 2 00:08:25.227 17:11:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:25.227 17:11:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:25.227 17:11:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:25.227 17:11:21 -- scripts/common.sh@367 -- # return 0 00:08:25.227 17:11:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.227 17:11:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:25.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.227 --rc genhtml_branch_coverage=1 00:08:25.227 --rc genhtml_function_coverage=1 00:08:25.227 --rc genhtml_legend=1 00:08:25.227 --rc geninfo_all_blocks=1 00:08:25.227 --rc geninfo_unexecuted_blocks=1 00:08:25.227 00:08:25.227 ' 00:08:25.227 17:11:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:25.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.227 --rc genhtml_branch_coverage=1 00:08:25.227 --rc genhtml_function_coverage=1 00:08:25.227 --rc genhtml_legend=1 00:08:25.227 --rc geninfo_all_blocks=1 00:08:25.227 --rc geninfo_unexecuted_blocks=1 00:08:25.227 00:08:25.227 ' 00:08:25.227 17:11:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:25.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.227 --rc genhtml_branch_coverage=1 00:08:25.227 --rc genhtml_function_coverage=1 00:08:25.227 --rc genhtml_legend=1 00:08:25.227 --rc geninfo_all_blocks=1 00:08:25.227 --rc geninfo_unexecuted_blocks=1 00:08:25.227 00:08:25.227 ' 
00:08:25.227 17:11:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:25.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.227 --rc genhtml_branch_coverage=1 00:08:25.227 --rc genhtml_function_coverage=1 00:08:25.227 --rc genhtml_legend=1 00:08:25.227 --rc geninfo_all_blocks=1 00:08:25.227 --rc geninfo_unexecuted_blocks=1 00:08:25.227 00:08:25.227 ' 00:08:25.227 17:11:21 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.227 17:11:21 -- nvmf/common.sh@7 -- # uname -s 00:08:25.227 17:11:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.227 17:11:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.227 17:11:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.227 17:11:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.227 17:11:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.227 17:11:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.227 17:11:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.227 17:11:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.227 17:11:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.227 17:11:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.227 17:11:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:25.227 17:11:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:25.227 17:11:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.227 17:11:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.227 17:11:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.227 17:11:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:25.227 17:11:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.227 17:11:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.227 17:11:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.227 17:11:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.227 17:11:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.227 17:11:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.227 17:11:21 -- paths/export.sh@5 -- # export PATH 00:08:25.227 17:11:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.227 17:11:21 -- nvmf/common.sh@46 -- # : 0 00:08:25.227 17:11:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:25.227 17:11:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:25.227 17:11:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:25.227 17:11:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.227 17:11:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.227 17:11:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:25.227 17:11:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:25.227 17:11:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:25.227 17:11:21 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:25.227 17:11:21 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:25.227 17:11:21 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:25.227 17:11:21 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:25.227 17:11:21 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:25.227 17:11:21 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:25.227 17:11:21 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:25.227 17:11:21 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:25.227 17:11:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.227 17:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:25.227 17:11:21 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:25.227 17:11:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:25.227 17:11:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.227 17:11:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:25.227 17:11:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:25.227 17:11:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:25.227 17:11:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.227 17:11:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.227 17:11:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.227 17:11:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:25.228 17:11:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:25.228 17:11:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:25.228 17:11:21 -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.795 17:11:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:31.795 17:11:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:32.055 17:11:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:32.055 17:11:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:32.055 17:11:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:32.055 17:11:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:32.055 17:11:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:32.055 17:11:28 -- nvmf/common.sh@294 -- # net_devs=() 00:08:32.055 17:11:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:32.055 17:11:28 -- nvmf/common.sh@295 -- # e810=() 00:08:32.055 17:11:28 -- nvmf/common.sh@295 -- # local -ga e810 00:08:32.055 17:11:28 -- nvmf/common.sh@296 -- # x722=() 00:08:32.055 17:11:28 -- nvmf/common.sh@296 -- # local -ga x722 00:08:32.055 17:11:28 -- nvmf/common.sh@297 -- # mlx=() 00:08:32.055 17:11:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:32.055 17:11:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.055 17:11:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:32.055 17:11:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:32.055 17:11:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:32.055 17:11:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:32.055 17:11:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:32.055 17:11:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:32.055 17:11:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:32.055 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:32.055 17:11:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.055 17:11:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:32.055 17:11:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:32.055 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:32.055 17:11:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.055 17:11:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:32.055 17:11:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:32.055 17:11:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.055 17:11:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:32.055 17:11:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.055 17:11:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:32.055 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:32.055 17:11:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.055 17:11:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:32.055 17:11:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.055 17:11:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:32.055 17:11:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.055 17:11:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:32.055 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:32.055 17:11:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.055 17:11:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:32.055 17:11:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:32.055 17:11:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:32.055 17:11:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:32.055 17:11:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:32.056 17:11:28 -- nvmf/common.sh@57 -- # uname 00:08:32.056 17:11:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:32.056 17:11:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:32.056 17:11:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:32.056 17:11:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:32.056 17:11:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:32.056 17:11:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:32.056 17:11:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:32.056 17:11:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:32.056 17:11:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:32.056 17:11:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:32.056 17:11:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:32.056 17:11:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.056 17:11:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:32.056 17:11:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:32.056 17:11:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.056 17:11:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:32.056 17:11:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@104 -- # continue 2 00:08:32.056 17:11:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@104 -- # continue 2 00:08:32.056 17:11:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:32.056 17:11:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:32.056 17:11:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:32.056 17:11:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:32.056 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.056 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:32.056 altname enp217s0f0np0 00:08:32.056 altname ens818f0np0 00:08:32.056 inet 192.168.100.8/24 scope global mlx_0_0 00:08:32.056 valid_lft forever preferred_lft forever 00:08:32.056 17:11:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:32.056 17:11:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:32.056 17:11:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:32.056 17:11:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:32.056 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.056 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:32.056 altname enp217s0f1np1 00:08:32.056 altname ens818f1np1 00:08:32.056 inet 192.168.100.9/24 scope global mlx_0_1 00:08:32.056 valid_lft forever preferred_lft forever 00:08:32.056 17:11:28 -- nvmf/common.sh@410 -- # return 0 00:08:32.056 17:11:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:32.056 17:11:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:32.056 17:11:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:32.056 17:11:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:32.056 17:11:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.056 17:11:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:32.056 17:11:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:32.056 17:11:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.056 17:11:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:32.056 17:11:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.056 17:11:28 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@104 -- # continue 2 00:08:32.056 17:11:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.056 17:11:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.056 17:11:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@104 -- # continue 2 00:08:32.056 17:11:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:32.056 17:11:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:32.056 17:11:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:32.056 17:11:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:32.056 17:11:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:32.056 17:11:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:32.056 192.168.100.9' 00:08:32.056 17:11:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:32.056 192.168.100.9' 00:08:32.056 17:11:28 -- nvmf/common.sh@445 -- # head -n 1 00:08:32.056 17:11:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:32.056 17:11:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:32.056 192.168.100.9' 00:08:32.056 17:11:28 -- nvmf/common.sh@446 -- # tail -n +2 00:08:32.056 17:11:28 -- nvmf/common.sh@446 -- # head -n 1 00:08:32.056 17:11:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:32.056 17:11:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:32.056 17:11:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:32.056 17:11:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:32.056 17:11:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:32.056 17:11:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:32.056 17:11:28 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:32.056 17:11:28 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:32.056 17:11:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.056 17:11:28 -- common/autotest_common.sh@10 -- # set +x 00:08:32.056 17:11:28 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:32.056 17:11:28 -- target/nvmf_example.sh@34 -- # nvmfpid=1211777 00:08:32.056 17:11:28 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.056 17:11:28 -- target/nvmf_example.sh@36 -- # waitforlisten 1211777 00:08:32.056 17:11:28 -- common/autotest_common.sh@829 -- # '[' -z 1211777 ']' 00:08:32.056 17:11:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.056 17:11:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.056 
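The RDMA_IP_LIST assembled just above comes from reading the first IPv4 address on each RDMA-capable interface, using the same ip/awk/cut pipeline the trace repeats for mlx_0_0 and mlx_0_1. Packaged as a helper (the function name here is illustrative, not the script's):

    # Print the first IPv4 address configured on an interface, prefix length stripped.
    iface_ipv4() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n1
    }

    first_ip=$(iface_ipv4 mlx_0_0)     # 192.168.100.8 in this run
    second_ip=$(iface_ipv4 mlx_0_1)    # 192.168.100.9 in this run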
17:11:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.056 17:11:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.056 17:11:28 -- common/autotest_common.sh@10 -- # set +x 00:08:32.056 17:11:28 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:32.315 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.882 17:11:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.882 17:11:29 -- common/autotest_common.sh@862 -- # return 0 00:08:32.882 17:11:29 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:32.882 17:11:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.882 17:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.140 17:11:29 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:33.140 17:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.140 17:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.140 17:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.140 17:11:29 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:33.140 17:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.140 17:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 17:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 17:11:29 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:33.399 17:11:29 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.399 17:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 17:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 17:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 17:11:29 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:33.399 17:11:29 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.399 17:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 17:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 17:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 17:11:29 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:33.399 17:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.399 17:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 17:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.399 17:11:29 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:33.399 17:11:29 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:33.399 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.604 Initializing NVMe Controllers 00:08:45.604 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:45.604 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:45.604 
Initialization complete. Launching workers. 00:08:45.604 ======================================================== 00:08:45.604 Latency(us) 00:08:45.604 Device Information : IOPS MiB/s Average min max 00:08:45.604 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25961.99 101.41 2465.03 593.81 14065.49 00:08:45.604 ======================================================== 00:08:45.604 Total : 25961.99 101.41 2465.03 593.81 14065.49 00:08:45.604 00:08:45.604 17:11:41 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:45.604 17:11:41 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:45.604 17:11:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:45.604 17:11:41 -- nvmf/common.sh@116 -- # sync 00:08:45.604 17:11:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:45.604 17:11:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:45.604 17:11:41 -- nvmf/common.sh@119 -- # set +e 00:08:45.604 17:11:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:45.604 17:11:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:45.604 rmmod nvme_rdma 00:08:45.604 rmmod nvme_fabrics 00:08:45.604 17:11:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:45.604 17:11:41 -- nvmf/common.sh@123 -- # set -e 00:08:45.604 17:11:41 -- nvmf/common.sh@124 -- # return 0 00:08:45.604 17:11:41 -- nvmf/common.sh@477 -- # '[' -n 1211777 ']' 00:08:45.604 17:11:41 -- nvmf/common.sh@478 -- # killprocess 1211777 00:08:45.604 17:11:41 -- common/autotest_common.sh@936 -- # '[' -z 1211777 ']' 00:08:45.604 17:11:41 -- common/autotest_common.sh@940 -- # kill -0 1211777 00:08:45.604 17:11:41 -- common/autotest_common.sh@941 -- # uname 00:08:45.604 17:11:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.604 17:11:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1211777 00:08:45.604 17:11:41 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:45.604 17:11:41 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:45.604 17:11:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1211777' 00:08:45.604 killing process with pid 1211777 00:08:45.604 17:11:41 -- common/autotest_common.sh@955 -- # kill 1211777 00:08:45.604 17:11:41 -- common/autotest_common.sh@960 -- # wait 1211777 00:08:45.604 nvmf threads initialize successfully 00:08:45.604 bdev subsystem init successfully 00:08:45.604 created a nvmf target service 00:08:45.604 create targets's poll groups done 00:08:45.604 all subsystems of target started 00:08:45.604 nvmf target is running 00:08:45.604 all subsystems of target stopped 00:08:45.604 destroy targets's poll groups done 00:08:45.604 destroyed the nvmf target service 00:08:45.604 bdev subsystem finish successfully 00:08:45.604 nvmf threads destroy successfully 00:08:45.604 17:11:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:45.604 17:11:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:45.604 17:11:41 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:45.604 17:11:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.604 17:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:45.604 00:08:45.604 real 0m19.808s 00:08:45.604 user 0m52.322s 00:08:45.604 sys 0m5.739s 00:08:45.604 17:11:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.604 17:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:45.604 ************************************ 00:08:45.604 END TEST nvmf_example 00:08:45.604 ************************************ 
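For reference, the nvmf_example flow that just finished can be reproduced by hand from an SPDK build tree: start the example target, assemble the subsystem through the same RPCs the trace issued via rpc_cmd, then run the bundled perf tool. A sketch using scripts/rpc.py, assuming the target keeps its default RPC socket at /var/tmp/spdk.sock as it did in this run:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # path used by this job

    # 1. start the example NVMe-oF target on cores 0-3
    "$SPDK"/build/examples/nvmf -i 0 -g 10000 -m 0xF &
    sleep 2    # the test proper waits with waitforlisten; a short sleep stands in here

    # 2. RDMA transport, one 64 MiB Malloc namespace, one listener on 192.168.100.8:4420
    RPC="$SPDK"/scripts/rpc.py
    "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$RPC" bdev_malloc_create 64 512                      # creates bdev "Malloc0"
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # 3. drive it: queue depth 64, 4 KiB I/O, randrw with 30% reads, 10 seconds
    "$SPDK"/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'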
00:08:45.604 17:11:41 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:45.604 17:11:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:45.604 17:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.604 17:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:45.604 ************************************ 00:08:45.604 START TEST nvmf_filesystem 00:08:45.604 ************************************ 00:08:45.604 17:11:41 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:45.604 * Looking for test storage... 00:08:45.604 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.604 17:11:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:45.604 17:11:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:45.604 17:11:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:45.604 17:11:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:45.604 17:11:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:45.604 17:11:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:45.604 17:11:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:45.604 17:11:41 -- scripts/common.sh@335 -- # IFS=.-: 00:08:45.604 17:11:41 -- scripts/common.sh@335 -- # read -ra ver1 00:08:45.604 17:11:41 -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.604 17:11:41 -- scripts/common.sh@336 -- # read -ra ver2 00:08:45.604 17:11:41 -- scripts/common.sh@337 -- # local 'op=<' 00:08:45.604 17:11:41 -- scripts/common.sh@339 -- # ver1_l=2 00:08:45.604 17:11:41 -- scripts/common.sh@340 -- # ver2_l=1 00:08:45.604 17:11:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:45.604 17:11:41 -- scripts/common.sh@343 -- # case "$op" in 00:08:45.604 17:11:41 -- scripts/common.sh@344 -- # : 1 00:08:45.604 17:11:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:45.604 17:11:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.604 17:11:41 -- scripts/common.sh@364 -- # decimal 1 00:08:45.604 17:11:41 -- scripts/common.sh@352 -- # local d=1 00:08:45.604 17:11:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.604 17:11:41 -- scripts/common.sh@354 -- # echo 1 00:08:45.604 17:11:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:45.604 17:11:41 -- scripts/common.sh@365 -- # decimal 2 00:08:45.604 17:11:41 -- scripts/common.sh@352 -- # local d=2 00:08:45.604 17:11:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.604 17:11:41 -- scripts/common.sh@354 -- # echo 2 00:08:45.605 17:11:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:45.605 17:11:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:45.605 17:11:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:45.605 17:11:41 -- scripts/common.sh@367 -- # return 0 00:08:45.605 17:11:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.605 17:11:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:45.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.605 --rc genhtml_branch_coverage=1 00:08:45.605 --rc genhtml_function_coverage=1 00:08:45.605 --rc genhtml_legend=1 00:08:45.605 --rc geninfo_all_blocks=1 00:08:45.605 --rc geninfo_unexecuted_blocks=1 00:08:45.605 00:08:45.605 ' 00:08:45.605 17:11:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:45.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.605 --rc genhtml_branch_coverage=1 00:08:45.605 --rc genhtml_function_coverage=1 00:08:45.605 --rc genhtml_legend=1 00:08:45.605 --rc geninfo_all_blocks=1 00:08:45.605 --rc geninfo_unexecuted_blocks=1 00:08:45.605 00:08:45.605 ' 00:08:45.605 17:11:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:45.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.605 --rc genhtml_branch_coverage=1 00:08:45.605 --rc genhtml_function_coverage=1 00:08:45.605 --rc genhtml_legend=1 00:08:45.605 --rc geninfo_all_blocks=1 00:08:45.605 --rc geninfo_unexecuted_blocks=1 00:08:45.605 00:08:45.605 ' 00:08:45.605 17:11:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:45.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.605 --rc genhtml_branch_coverage=1 00:08:45.605 --rc genhtml_function_coverage=1 00:08:45.605 --rc genhtml_legend=1 00:08:45.605 --rc geninfo_all_blocks=1 00:08:45.605 --rc geninfo_unexecuted_blocks=1 00:08:45.605 00:08:45.605 ' 00:08:45.605 17:11:41 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:45.605 17:11:41 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:45.605 17:11:41 -- common/autotest_common.sh@34 -- # set -e 00:08:45.605 17:11:41 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:45.605 17:11:41 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:45.605 17:11:41 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:45.605 17:11:41 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:45.605 17:11:41 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:45.605 17:11:41 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:45.605 17:11:41 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:45.605 17:11:41 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
00:08:45.605 17:11:41 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:45.605 17:11:41 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:45.605 17:11:41 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:45.605 17:11:41 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:45.605 17:11:41 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:45.605 17:11:41 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:45.605 17:11:41 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:45.605 17:11:41 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:45.605 17:11:41 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:45.605 17:11:41 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:45.605 17:11:41 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:45.605 17:11:41 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:45.605 17:11:41 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:45.605 17:11:41 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:45.605 17:11:41 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:45.605 17:11:41 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:45.605 17:11:41 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:45.605 17:11:41 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:45.605 17:11:41 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:45.605 17:11:41 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:45.605 17:11:41 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:45.605 17:11:41 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:45.605 17:11:41 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:45.605 17:11:41 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:45.605 17:11:41 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:45.605 17:11:41 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:45.605 17:11:41 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:45.605 17:11:41 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:45.605 17:11:41 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:45.605 17:11:41 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:45.605 17:11:41 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:45.605 17:11:41 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:45.605 17:11:41 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:45.605 17:11:41 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:45.605 17:11:41 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:45.605 17:11:41 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:45.605 17:11:41 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:45.605 17:11:41 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:45.605 17:11:41 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:45.605 17:11:41 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:45.605 17:11:41 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:45.605 17:11:41 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:45.605 17:11:41 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:45.605 17:11:41 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:45.605 17:11:41 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:45.605 
17:11:41 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:45.605 17:11:41 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:45.605 17:11:41 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:45.605 17:11:41 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:45.605 17:11:41 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:45.605 17:11:41 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:45.605 17:11:41 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:45.605 17:11:41 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:45.605 17:11:41 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:45.605 17:11:41 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:45.605 17:11:41 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:45.605 17:11:41 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:45.605 17:11:41 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:45.605 17:11:41 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:45.605 17:11:41 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:45.605 17:11:41 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:45.605 17:11:41 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:45.605 17:11:41 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:45.605 17:11:41 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:45.605 17:11:41 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:45.605 17:11:41 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:45.605 17:11:41 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:45.605 17:11:41 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:45.605 17:11:41 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:45.605 17:11:41 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:45.605 17:11:41 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:45.605 17:11:41 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:45.605 17:11:41 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:45.605 17:11:41 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:45.605 17:11:41 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:45.605 17:11:41 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:45.605 17:11:41 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:45.605 17:11:41 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:45.605 17:11:41 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:45.605 17:11:41 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:45.605 17:11:41 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:45.605 17:11:41 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:45.605 17:11:41 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:45.605 17:11:41 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:45.605 17:11:41 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:45.605 17:11:41 -- common/applications.sh@16 -- # 
NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:45.605 17:11:41 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:45.605 17:11:41 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:45.605 17:11:41 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:45.605 17:11:41 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:45.605 17:11:41 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:45.605 #define SPDK_CONFIG_H 00:08:45.605 #define SPDK_CONFIG_APPS 1 00:08:45.605 #define SPDK_CONFIG_ARCH native 00:08:45.605 #undef SPDK_CONFIG_ASAN 00:08:45.605 #undef SPDK_CONFIG_AVAHI 00:08:45.605 #undef SPDK_CONFIG_CET 00:08:45.605 #define SPDK_CONFIG_COVERAGE 1 00:08:45.605 #define SPDK_CONFIG_CROSS_PREFIX 00:08:45.605 #undef SPDK_CONFIG_CRYPTO 00:08:45.605 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:45.605 #undef SPDK_CONFIG_CUSTOMOCF 00:08:45.605 #undef SPDK_CONFIG_DAOS 00:08:45.605 #define SPDK_CONFIG_DAOS_DIR 00:08:45.605 #define SPDK_CONFIG_DEBUG 1 00:08:45.605 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:45.605 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:45.605 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:45.605 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:45.605 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:45.605 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:45.605 #define SPDK_CONFIG_EXAMPLES 1 00:08:45.605 #undef SPDK_CONFIG_FC 00:08:45.605 #define SPDK_CONFIG_FC_PATH 00:08:45.605 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:45.605 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:45.606 #undef SPDK_CONFIG_FUSE 00:08:45.606 #undef SPDK_CONFIG_FUZZER 00:08:45.606 #define SPDK_CONFIG_FUZZER_LIB 00:08:45.606 #undef SPDK_CONFIG_GOLANG 00:08:45.606 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:45.606 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:45.606 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:45.606 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:45.606 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:45.606 #define SPDK_CONFIG_IDXD 1 00:08:45.606 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:45.606 #undef SPDK_CONFIG_IPSEC_MB 00:08:45.606 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:45.606 #define SPDK_CONFIG_ISAL 1 00:08:45.606 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:45.606 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:45.606 #define SPDK_CONFIG_LIBDIR 00:08:45.606 #undef SPDK_CONFIG_LTO 00:08:45.606 #define SPDK_CONFIG_MAX_LCORES 00:08:45.606 #define SPDK_CONFIG_NVME_CUSE 1 00:08:45.606 #undef SPDK_CONFIG_OCF 00:08:45.606 #define SPDK_CONFIG_OCF_PATH 00:08:45.606 #define SPDK_CONFIG_OPENSSL_PATH 00:08:45.606 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:45.606 #undef SPDK_CONFIG_PGO_USE 00:08:45.606 #define SPDK_CONFIG_PREFIX /usr/local 00:08:45.606 #undef SPDK_CONFIG_RAID5F 00:08:45.606 #undef SPDK_CONFIG_RBD 00:08:45.606 #define SPDK_CONFIG_RDMA 1 00:08:45.606 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:45.606 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:45.606 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:45.606 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:45.606 #define SPDK_CONFIG_SHARED 1 00:08:45.606 #undef SPDK_CONFIG_SMA 00:08:45.606 #define SPDK_CONFIG_TESTS 1 00:08:45.606 #undef SPDK_CONFIG_TSAN 00:08:45.606 #define SPDK_CONFIG_UBLK 1 00:08:45.606 #define SPDK_CONFIG_UBSAN 1 00:08:45.606 #undef SPDK_CONFIG_UNIT_TESTS 
00:08:45.606 #undef SPDK_CONFIG_URING 00:08:45.606 #define SPDK_CONFIG_URING_PATH 00:08:45.606 #undef SPDK_CONFIG_URING_ZNS 00:08:45.606 #undef SPDK_CONFIG_USDT 00:08:45.606 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:45.606 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:45.606 #undef SPDK_CONFIG_VFIO_USER 00:08:45.606 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:45.606 #define SPDK_CONFIG_VHOST 1 00:08:45.606 #define SPDK_CONFIG_VIRTIO 1 00:08:45.606 #undef SPDK_CONFIG_VTUNE 00:08:45.606 #define SPDK_CONFIG_VTUNE_DIR 00:08:45.606 #define SPDK_CONFIG_WERROR 1 00:08:45.606 #define SPDK_CONFIG_WPDK_DIR 00:08:45.606 #undef SPDK_CONFIG_XNVME 00:08:45.606 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:45.606 17:11:41 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:45.606 17:11:41 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:45.606 17:11:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.606 17:11:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.606 17:11:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.606 17:11:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.606 17:11:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.606 17:11:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.606 17:11:41 -- paths/export.sh@5 -- # export PATH 00:08:45.606 17:11:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.606 17:11:41 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:45.606 17:11:41 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:45.606 17:11:41 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:45.606 17:11:41 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:45.606 17:11:41 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:45.606 17:11:41 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:45.606 17:11:41 -- pm/common@16 -- # TEST_TAG=N/A 00:08:45.606 17:11:41 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:45.606 17:11:41 -- common/autotest_common.sh@52 -- # : 1 00:08:45.606 17:11:41 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:45.606 17:11:41 -- common/autotest_common.sh@56 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:45.606 17:11:41 -- common/autotest_common.sh@58 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:45.606 17:11:41 -- common/autotest_common.sh@60 -- # : 1 00:08:45.606 17:11:41 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:45.606 17:11:41 -- common/autotest_common.sh@62 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:45.606 17:11:41 -- common/autotest_common.sh@64 -- # : 00:08:45.606 17:11:41 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:45.606 17:11:41 -- common/autotest_common.sh@66 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:45.606 17:11:41 -- common/autotest_common.sh@68 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:45.606 17:11:41 -- common/autotest_common.sh@70 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:45.606 17:11:41 -- common/autotest_common.sh@72 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:45.606 17:11:41 -- common/autotest_common.sh@74 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:45.606 17:11:41 -- common/autotest_common.sh@76 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:45.606 17:11:41 -- common/autotest_common.sh@78 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:45.606 17:11:41 -- common/autotest_common.sh@80 -- # : 1 00:08:45.606 17:11:41 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:45.606 17:11:41 -- common/autotest_common.sh@82 -- # : 0 
00:08:45.606 17:11:41 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:45.606 17:11:41 -- common/autotest_common.sh@84 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:45.606 17:11:41 -- common/autotest_common.sh@86 -- # : 1 00:08:45.606 17:11:41 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:45.606 17:11:41 -- common/autotest_common.sh@88 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:45.606 17:11:41 -- common/autotest_common.sh@90 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:45.606 17:11:41 -- common/autotest_common.sh@92 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:45.606 17:11:41 -- common/autotest_common.sh@94 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:45.606 17:11:41 -- common/autotest_common.sh@96 -- # : rdma 00:08:45.606 17:11:41 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:45.606 17:11:41 -- common/autotest_common.sh@98 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:45.606 17:11:41 -- common/autotest_common.sh@100 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:45.606 17:11:41 -- common/autotest_common.sh@102 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:45.606 17:11:41 -- common/autotest_common.sh@104 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:45.606 17:11:41 -- common/autotest_common.sh@106 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:45.606 17:11:41 -- common/autotest_common.sh@108 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:45.606 17:11:41 -- common/autotest_common.sh@110 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:45.606 17:11:41 -- common/autotest_common.sh@112 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:45.606 17:11:41 -- common/autotest_common.sh@114 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:45.606 17:11:41 -- common/autotest_common.sh@116 -- # : 1 00:08:45.606 17:11:41 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:45.606 17:11:41 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:45.606 17:11:41 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:45.606 17:11:41 -- common/autotest_common.sh@120 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:45.606 17:11:41 -- common/autotest_common.sh@122 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:45.606 17:11:41 -- common/autotest_common.sh@124 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:45.606 17:11:41 -- common/autotest_common.sh@126 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:45.606 17:11:41 -- common/autotest_common.sh@128 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 
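The long run of ": 0" / ": 1" entries followed by export SPDK_TEST_* above is consistent with the usual bash idiom for giving a flag a default only when the caller has not already set one; the exact spelling inside autotest_common.sh may differ, but the effect is the one sketched here:

    # Default SPDK_TEST_NVMF to 0 unless the environment already provides a value,
    # then export it for every child process. The leading ":" is a no-op command
    # whose argument still triggers the ${...:=...} assignment.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF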
00:08:45.606 17:11:41 -- common/autotest_common.sh@130 -- # : 0 00:08:45.606 17:11:41 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:45.606 17:11:41 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:45.606 17:11:41 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:45.606 17:11:41 -- common/autotest_common.sh@134 -- # : true 00:08:45.607 17:11:41 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:45.607 17:11:41 -- common/autotest_common.sh@136 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:45.607 17:11:41 -- common/autotest_common.sh@138 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:45.607 17:11:41 -- common/autotest_common.sh@140 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:45.607 17:11:41 -- common/autotest_common.sh@142 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:45.607 17:11:41 -- common/autotest_common.sh@144 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:45.607 17:11:41 -- common/autotest_common.sh@146 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:45.607 17:11:41 -- common/autotest_common.sh@148 -- # : mlx5 00:08:45.607 17:11:41 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:45.607 17:11:41 -- common/autotest_common.sh@150 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:45.607 17:11:41 -- common/autotest_common.sh@152 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:45.607 17:11:41 -- common/autotest_common.sh@154 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:45.607 17:11:41 -- common/autotest_common.sh@156 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:45.607 17:11:41 -- common/autotest_common.sh@158 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:45.607 17:11:41 -- common/autotest_common.sh@160 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:45.607 17:11:41 -- common/autotest_common.sh@163 -- # : 00:08:45.607 17:11:41 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:45.607 17:11:41 -- common/autotest_common.sh@165 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:45.607 17:11:41 -- common/autotest_common.sh@167 -- # : 0 00:08:45.607 17:11:41 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:45.607 17:11:41 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:45.607 17:11:41 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:45.607 17:11:41 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:45.607 17:11:41 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:45.607 17:11:41 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:08:45.607 17:11:41 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:45.607 17:11:41 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:45.607 17:11:41 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:45.607 17:11:41 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:45.607 17:11:41 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:45.607 17:11:41 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:45.607 17:11:41 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:45.607 17:11:41 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:45.607 17:11:41 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:45.607 
17:11:41 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:45.607 17:11:41 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:45.607 17:11:41 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:45.607 17:11:41 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:45.607 17:11:41 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:45.607 17:11:41 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:45.607 17:11:41 -- common/autotest_common.sh@196 -- # cat 00:08:45.607 17:11:41 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:45.607 17:11:41 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:45.607 17:11:41 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:45.607 17:11:41 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:45.607 17:11:41 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:45.607 17:11:41 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:45.607 17:11:41 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:45.607 17:11:41 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:45.607 17:11:41 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:45.607 17:11:41 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:45.607 17:11:41 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:45.607 17:11:41 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:45.607 17:11:41 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:45.607 17:11:41 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:45.607 17:11:41 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:45.607 17:11:41 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:45.607 17:11:41 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:45.607 17:11:41 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:45.607 17:11:41 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:45.607 17:11:41 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:45.607 17:11:41 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:45.607 17:11:41 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:45.607 17:11:41 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:45.607 17:11:41 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:45.607 17:11:41 -- 
common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:45.607 17:11:41 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:45.607 17:11:41 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:45.607 17:11:41 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:45.607 17:11:41 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:45.607 17:11:41 -- common/autotest_common.sh@259 -- # valgrind= 00:08:45.607 17:11:41 -- common/autotest_common.sh@265 -- # uname -s 00:08:45.607 17:11:41 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:45.607 17:11:41 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:45.607 17:11:41 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:45.607 17:11:41 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:45.607 17:11:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:45.607 17:11:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:45.607 17:11:41 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:45.607 17:11:41 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:45.607 17:11:41 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:45.607 17:11:41 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:45.607 17:11:41 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:45.607 17:11:41 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:45.607 17:11:41 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:45.607 17:11:41 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:45.607 17:11:41 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:45.607 17:11:41 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:08:45.607 17:11:41 -- common/autotest_common.sh@319 -- # [[ -z 1214100 ]] 00:08:45.607 17:11:41 -- common/autotest_common.sh@319 -- # kill -0 1214100 00:08:45.607 17:11:41 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:45.607 17:11:41 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:45.607 17:11:41 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:45.607 17:11:41 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:45.607 17:11:41 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:45.607 17:11:41 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:45.607 17:11:41 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:45.607 17:11:41 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:45.607 17:11:41 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.Qx4Kve 00:08:45.607 17:11:41 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:45.607 17:11:41 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Qx4Kve/tests/target /tmp/spdk.Qx4Kve 00:08:45.608 17:11:41 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@328 -- # df -T 
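
The mktemp/df calls above start set_test_storage: the lines that follow read the df -T output mount by mount and keep the first candidate directory whose filesystem offers the roughly 2.2 GB the test asks for. A stand-alone sketch of that selection, using generic candidates rather than the real testdir/fallback pair:

    #!/usr/bin/env bash
    # Pick the first candidate directory with enough free space for the test.
    requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))   # 2214592512, as in the trace
    candidates=("$PWD" "$(mktemp -udt spdk.XXXXXX)")                # illustrative candidates

    for dir in "${candidates[@]}"; do
        mkdir -p "$dir"
        # df -Pk prints 1K blocks in POSIX layout; column 4 is the available space
        avail_kb=$(df -Pk "$dir" | awk 'NR == 2 {print $4}')
        if (( avail_kb * 1024 >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir"
            break
        fi
    done
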
00:08:45.608 17:11:41 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:45.608 17:11:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=422735872 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:45.608 17:11:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=4861693952 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=54889541632 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730607104 00:08:45.608 17:11:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=6841065472 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864044032 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865301504 00:08:45.608 17:11:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=12336685056 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346122240 00:08:45.608 17:11:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=9437184 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865088512 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865305600 00:08:45.608 17:11:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=217088 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:45.608 17:11:41 -- common/autotest_common.sh@363 -- # 
sizes["$mount"]=6173057024 00:08:45.608 17:11:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:45.608 17:11:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:45.608 17:11:41 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:45.608 * Looking for test storage... 00:08:45.608 17:11:41 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:45.608 17:11:41 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:45.608 17:11:41 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.608 17:11:41 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:45.608 17:11:41 -- common/autotest_common.sh@373 -- # mount=/ 00:08:45.608 17:11:41 -- common/autotest_common.sh@375 -- # target_space=54889541632 00:08:45.608 17:11:41 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:45.608 17:11:41 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:45.608 17:11:41 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@382 -- # new_size=9055657984 00:08:45.608 17:11:41 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:45.608 17:11:41 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.608 17:11:41 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.608 17:11:41 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.608 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.608 17:11:41 -- common/autotest_common.sh@390 -- # return 0 00:08:45.608 17:11:41 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:45.608 17:11:41 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:45.608 17:11:41 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:45.608 17:11:41 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:45.608 17:11:41 -- common/autotest_common.sh@1682 -- # true 00:08:45.608 17:11:41 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:45.608 17:11:41 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@27 -- # exec 00:08:45.608 17:11:41 -- common/autotest_common.sh@29 -- # exec 00:08:45.608 17:11:41 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:45.608 17:11:41 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:45.608 17:11:41 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:45.608 17:11:41 -- common/autotest_common.sh@18 -- # set -x 00:08:45.608 17:11:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:45.608 17:11:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:45.608 17:11:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:45.608 17:11:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:45.608 17:11:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:45.608 17:11:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:45.608 17:11:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:45.608 17:11:41 -- scripts/common.sh@335 -- # IFS=.-: 00:08:45.608 17:11:41 -- scripts/common.sh@335 -- # read -ra ver1 00:08:45.608 17:11:41 -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.608 17:11:41 -- scripts/common.sh@336 -- # read -ra ver2 00:08:45.608 17:11:41 -- scripts/common.sh@337 -- # local 'op=<' 00:08:45.608 17:11:41 -- scripts/common.sh@339 -- # ver1_l=2 00:08:45.608 17:11:41 -- scripts/common.sh@340 -- # ver2_l=1 00:08:45.608 17:11:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:45.608 17:11:41 -- scripts/common.sh@343 -- # case "$op" in 00:08:45.608 17:11:41 -- scripts/common.sh@344 -- # : 1 00:08:45.608 17:11:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:45.608 17:11:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.608 17:11:41 -- scripts/common.sh@364 -- # decimal 1 00:08:45.608 17:11:41 -- scripts/common.sh@352 -- # local d=1 00:08:45.608 17:11:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.608 17:11:41 -- scripts/common.sh@354 -- # echo 1 00:08:45.608 17:11:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:45.608 17:11:41 -- scripts/common.sh@365 -- # decimal 2 00:08:45.608 17:11:41 -- scripts/common.sh@352 -- # local d=2 00:08:45.608 17:11:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.608 17:11:41 -- scripts/common.sh@354 -- # echo 2 00:08:45.608 17:11:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:45.608 17:11:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:45.608 17:11:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:45.608 17:11:41 -- scripts/common.sh@367 -- # return 0 00:08:45.608 17:11:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.608 17:11:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:45.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.608 --rc genhtml_branch_coverage=1 00:08:45.608 --rc genhtml_function_coverage=1 00:08:45.608 --rc genhtml_legend=1 00:08:45.608 --rc geninfo_all_blocks=1 00:08:45.608 --rc geninfo_unexecuted_blocks=1 00:08:45.608 00:08:45.608 ' 00:08:45.608 17:11:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:45.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.608 --rc genhtml_branch_coverage=1 00:08:45.608 --rc genhtml_function_coverage=1 00:08:45.608 --rc genhtml_legend=1 00:08:45.608 --rc geninfo_all_blocks=1 00:08:45.608 --rc geninfo_unexecuted_blocks=1 00:08:45.608 00:08:45.608 ' 00:08:45.608 17:11:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:45.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.608 --rc genhtml_branch_coverage=1 00:08:45.609 --rc genhtml_function_coverage=1 00:08:45.609 --rc genhtml_legend=1 00:08:45.609 --rc geninfo_all_blocks=1 00:08:45.609 --rc 
geninfo_unexecuted_blocks=1 00:08:45.609 00:08:45.609 ' 00:08:45.609 17:11:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.609 --rc genhtml_branch_coverage=1 00:08:45.609 --rc genhtml_function_coverage=1 00:08:45.609 --rc genhtml_legend=1 00:08:45.609 --rc geninfo_all_blocks=1 00:08:45.609 --rc geninfo_unexecuted_blocks=1 00:08:45.609 00:08:45.609 ' 00:08:45.609 17:11:41 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.609 17:11:41 -- nvmf/common.sh@7 -- # uname -s 00:08:45.609 17:11:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.609 17:11:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.609 17:11:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.609 17:11:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.609 17:11:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.609 17:11:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.609 17:11:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.609 17:11:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.609 17:11:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.609 17:11:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.609 17:11:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:45.609 17:11:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:45.609 17:11:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.609 17:11:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.609 17:11:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.609 17:11:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:45.609 17:11:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.609 17:11:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.609 17:11:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.609 17:11:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.609 17:11:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.609 17:11:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.609 17:11:41 -- paths/export.sh@5 -- # export PATH 00:08:45.609 17:11:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.609 17:11:41 -- nvmf/common.sh@46 -- # : 0 00:08:45.609 17:11:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:45.609 17:11:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:45.609 17:11:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:45.609 17:11:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.609 17:11:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.609 17:11:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:45.609 17:11:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:45.609 17:11:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:45.609 17:11:41 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:45.609 17:11:41 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:45.609 17:11:41 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:45.609 17:11:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:45.609 17:11:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.609 17:11:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:45.609 17:11:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:45.609 17:11:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:45.609 17:11:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.609 17:11:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.609 17:11:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.609 17:11:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:45.609 17:11:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:45.609 17:11:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:45.609 17:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:52.176 17:11:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:52.176 17:11:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:52.176 17:11:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:52.176 17:11:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:52.176 17:11:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:52.176 17:11:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:52.176 17:11:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:52.176 17:11:48 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:52.176 17:11:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:52.176 17:11:48 -- nvmf/common.sh@295 -- # e810=() 00:08:52.176 17:11:48 -- nvmf/common.sh@295 -- # local -ga e810 00:08:52.176 17:11:48 -- nvmf/common.sh@296 -- # x722=() 00:08:52.176 17:11:48 -- nvmf/common.sh@296 -- # local -ga x722 00:08:52.176 17:11:48 -- nvmf/common.sh@297 -- # mlx=() 00:08:52.176 17:11:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:52.176 17:11:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.176 17:11:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:52.176 17:11:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:52.176 17:11:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:52.176 17:11:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:52.176 17:11:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:52.176 17:11:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:52.176 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:52.176 17:11:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.176 17:11:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:52.176 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:52.176 17:11:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.176 17:11:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:52.176 17:11:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.176 
17:11:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.176 17:11:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.176 17:11:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.176 17:11:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:52.176 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:52.176 17:11:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.176 17:11:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.176 17:11:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.176 17:11:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.176 17:11:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:52.176 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:52.176 17:11:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.176 17:11:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:52.176 17:11:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:52.176 17:11:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:52.176 17:11:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:52.176 17:11:48 -- nvmf/common.sh@57 -- # uname 00:08:52.176 17:11:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:52.176 17:11:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:52.176 17:11:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:52.176 17:11:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:52.176 17:11:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:52.176 17:11:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:52.176 17:11:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:52.176 17:11:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:52.176 17:11:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:52.176 17:11:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:52.176 17:11:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:52.176 17:11:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.176 17:11:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:52.176 17:11:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:52.176 17:11:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.176 17:11:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:52.176 17:11:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:52.176 17:11:48 -- nvmf/common.sh@104 -- # continue 2 00:08:52.176 17:11:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.176 17:11:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:52.176 17:11:48 -- nvmf/common.sh@104 -- # continue 2 00:08:52.176 17:11:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:52.176 17:11:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:52.176 17:11:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:52.176 17:11:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:52.176 17:11:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.176 17:11:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.176 17:11:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:52.176 17:11:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:52.176 17:11:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:52.176 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.176 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:52.176 altname enp217s0f0np0 00:08:52.176 altname ens818f0np0 00:08:52.177 inet 192.168.100.8/24 scope global mlx_0_0 00:08:52.177 valid_lft forever preferred_lft forever 00:08:52.177 17:11:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:52.177 17:11:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:52.177 17:11:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.177 17:11:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:52.177 17:11:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:52.177 17:11:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:52.177 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.177 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:52.177 altname enp217s0f1np1 00:08:52.177 altname ens818f1np1 00:08:52.177 inet 192.168.100.9/24 scope global mlx_0_1 00:08:52.177 valid_lft forever preferred_lft forever 00:08:52.177 17:11:48 -- nvmf/common.sh@410 -- # return 0 00:08:52.177 17:11:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:52.177 17:11:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:52.177 17:11:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:52.177 17:11:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:52.177 17:11:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:52.177 17:11:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.177 17:11:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:52.177 17:11:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:52.177 17:11:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.177 17:11:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:52.177 17:11:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.177 17:11:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.177 17:11:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.177 17:11:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:52.177 17:11:48 -- nvmf/common.sh@104 -- # continue 2 00:08:52.177 17:11:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:52.177 17:11:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.177 17:11:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.177 17:11:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.177 17:11:48 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.177 17:11:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:52.177 17:11:48 -- nvmf/common.sh@104 -- # continue 2 00:08:52.177 17:11:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:52.177 17:11:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:52.177 17:11:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.177 17:11:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:52.177 17:11:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:52.177 17:11:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:52.177 17:11:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:52.177 17:11:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:52.177 192.168.100.9' 00:08:52.177 17:11:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:52.177 192.168.100.9' 00:08:52.177 17:11:48 -- nvmf/common.sh@445 -- # head -n 1 00:08:52.177 17:11:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:52.177 17:11:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:52.177 192.168.100.9' 00:08:52.177 17:11:48 -- nvmf/common.sh@446 -- # head -n 1 00:08:52.177 17:11:48 -- nvmf/common.sh@446 -- # tail -n +2 00:08:52.177 17:11:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:52.177 17:11:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:52.177 17:11:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:52.177 17:11:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:52.177 17:11:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:52.177 17:11:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:52.177 17:11:48 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:52.177 17:11:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:52.177 17:11:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.177 17:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:52.177 ************************************ 00:08:52.177 START TEST nvmf_filesystem_no_in_capsule 00:08:52.177 ************************************ 00:08:52.177 17:11:48 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:52.177 17:11:48 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:52.177 17:11:48 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:52.177 17:11:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.177 17:11:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.177 17:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:52.177 17:11:48 -- nvmf/common.sh@469 -- # nvmfpid=1217297 00:08:52.177 17:11:48 -- nvmf/common.sh@470 -- # waitforlisten 1217297 00:08:52.177 17:11:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.177 17:11:48 -- common/autotest_common.sh@829 -- # '[' -z 1217297 ']' 00:08:52.177 17:11:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.177 17:11:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.177 17:11:48 -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.177 17:11:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.177 17:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:52.177 [2024-12-14 17:11:48.590640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:52.177 [2024-12-14 17:11:48.590697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.177 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.177 [2024-12-14 17:11:48.661383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.177 [2024-12-14 17:11:48.700004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.177 [2024-12-14 17:11:48.700133] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.177 [2024-12-14 17:11:48.700144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.177 [2024-12-14 17:11:48.700153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.177 [2024-12-14 17:11:48.700207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.177 [2024-12-14 17:11:48.700301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.177 [2024-12-14 17:11:48.700409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.177 [2024-12-14 17:11:48.700410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.108 17:11:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.108 17:11:49 -- common/autotest_common.sh@862 -- # return 0 00:08:53.108 17:11:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:53.108 17:11:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:53.108 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:08:53.108 17:11:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.108 17:11:49 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:53.108 17:11:49 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:53.108 17:11:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.108 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:08:53.108 [2024-12-14 17:11:49.475981] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:53.108 [2024-12-14 17:11:49.496939] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xefb0f0/0xeff5c0) succeed. 00:08:53.108 [2024-12-14 17:11:49.506115] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xefc690/0xf40c60) succeed. 
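
At this point the target process is listening and the RDMA transport has been created with -c 0 (no in-capsule data); the lines below add the 512 MB malloc bdev, the subsystem, its namespace and the RDMA listener. A hedged sketch of that same sequence issued through SPDK's scripts/rpc.py (the log's rpc_cmd is assumed to be a thin wrapper around it; arguments mirror the trace):

    rpc=./scripts/rpc.py        # client path assumed
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
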
00:08:53.108 17:11:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.108 17:11:49 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:53.108 17:11:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.108 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:08:53.108 Malloc1 00:08:53.108 17:11:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.108 17:11:49 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.108 17:11:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.108 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:08:53.109 17:11:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.109 17:11:49 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.109 17:11:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.109 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:08:53.109 17:11:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.109 17:11:49 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:53.109 17:11:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.109 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:08:53.109 [2024-12-14 17:11:49.752781] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:53.109 17:11:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.109 17:11:49 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:53.109 17:11:49 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:53.109 17:11:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:53.109 17:11:49 -- common/autotest_common.sh@1369 -- # local bs 00:08:53.109 17:11:49 -- common/autotest_common.sh@1370 -- # local nb 00:08:53.109 17:11:49 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:53.109 17:11:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.109 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:08:53.109 17:11:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.109 17:11:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:53.109 { 00:08:53.109 "name": "Malloc1", 00:08:53.109 "aliases": [ 00:08:53.109 "5bb9b0d5-7bfc-4c3d-ae8a-36a05617593f" 00:08:53.109 ], 00:08:53.109 "product_name": "Malloc disk", 00:08:53.109 "block_size": 512, 00:08:53.109 "num_blocks": 1048576, 00:08:53.109 "uuid": "5bb9b0d5-7bfc-4c3d-ae8a-36a05617593f", 00:08:53.109 "assigned_rate_limits": { 00:08:53.109 "rw_ios_per_sec": 0, 00:08:53.109 "rw_mbytes_per_sec": 0, 00:08:53.109 "r_mbytes_per_sec": 0, 00:08:53.109 "w_mbytes_per_sec": 0 00:08:53.109 }, 00:08:53.109 "claimed": true, 00:08:53.109 "claim_type": "exclusive_write", 00:08:53.109 "zoned": false, 00:08:53.109 "supported_io_types": { 00:08:53.109 "read": true, 00:08:53.109 "write": true, 00:08:53.109 "unmap": true, 00:08:53.109 "write_zeroes": true, 00:08:53.109 "flush": true, 00:08:53.109 "reset": true, 00:08:53.109 "compare": false, 00:08:53.109 "compare_and_write": false, 00:08:53.109 "abort": true, 00:08:53.109 "nvme_admin": false, 00:08:53.109 "nvme_io": false 00:08:53.109 }, 00:08:53.109 "memory_domains": [ 00:08:53.109 { 00:08:53.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.109 "dma_device_type": 2 00:08:53.109 } 00:08:53.109 ], 00:08:53.109 
"driver_specific": {} 00:08:53.109 } 00:08:53.109 ]' 00:08:53.109 17:11:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:53.367 17:11:49 -- common/autotest_common.sh@1372 -- # bs=512 00:08:53.367 17:11:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:53.367 17:11:49 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:53.367 17:11:49 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:53.367 17:11:49 -- common/autotest_common.sh@1377 -- # echo 512 00:08:53.367 17:11:49 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:53.367 17:11:49 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:54.303 17:11:50 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.304 17:11:50 -- common/autotest_common.sh@1187 -- # local i=0 00:08:54.304 17:11:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.304 17:11:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:54.304 17:11:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:56.840 17:11:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:56.840 17:11:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:56.840 17:11:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.840 17:11:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:56.840 17:11:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.840 17:11:52 -- common/autotest_common.sh@1197 -- # return 0 00:08:56.840 17:11:52 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:56.840 17:11:52 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:56.840 17:11:52 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:56.840 17:11:52 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:56.840 17:11:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:56.840 17:11:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:56.840 17:11:52 -- setup/common.sh@80 -- # echo 536870912 00:08:56.840 17:11:52 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:56.840 17:11:52 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:56.840 17:11:52 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:56.840 17:11:52 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:56.840 17:11:52 -- target/filesystem.sh@69 -- # partprobe 00:08:56.840 17:11:53 -- target/filesystem.sh@70 -- # sleep 1 00:08:57.778 17:11:54 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:57.778 17:11:54 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:57.778 17:11:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:57.778 17:11:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.778 17:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 ************************************ 00:08:57.778 START TEST filesystem_ext4 00:08:57.778 ************************************ 00:08:57.778 17:11:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:57.778 17:11:54 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:57.778 17:11:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:57.778 
17:11:54 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:57.778 17:11:54 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:57.778 17:11:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:57.778 17:11:54 -- common/autotest_common.sh@914 -- # local i=0 00:08:57.778 17:11:54 -- common/autotest_common.sh@915 -- # local force 00:08:57.778 17:11:54 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:57.778 17:11:54 -- common/autotest_common.sh@918 -- # force=-F 00:08:57.778 17:11:54 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:57.778 mke2fs 1.47.0 (5-Feb-2023) 00:08:57.778 Discarding device blocks: 0/522240 done 00:08:57.778 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:57.778 Filesystem UUID: edc0796b-4f2d-45b4-849d-afee281c4394 00:08:57.778 Superblock backups stored on blocks: 00:08:57.778 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:57.778 00:08:57.778 Allocating group tables: 0/64 done 00:08:57.778 Writing inode tables: 0/64 done 00:08:57.778 Creating journal (8192 blocks): done 00:08:57.778 Writing superblocks and filesystem accounting information: 0/64 done 00:08:57.778 00:08:57.778 17:11:54 -- common/autotest_common.sh@931 -- # return 0 00:08:57.778 17:11:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.778 17:11:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:57.778 17:11:54 -- target/filesystem.sh@25 -- # sync 00:08:57.778 17:11:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:57.778 17:11:54 -- target/filesystem.sh@27 -- # sync 00:08:57.778 17:11:54 -- target/filesystem.sh@29 -- # i=0 00:08:57.778 17:11:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:57.778 17:11:54 -- target/filesystem.sh@37 -- # kill -0 1217297 00:08:57.778 17:11:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:57.778 17:11:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:57.778 17:11:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:57.778 17:11:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:57.778 00:08:57.778 real 0m0.210s 00:08:57.778 user 0m0.027s 00:08:57.778 sys 0m0.081s 00:08:57.778 17:11:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.778 17:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 ************************************ 00:08:57.778 END TEST filesystem_ext4 00:08:57.778 ************************************ 00:08:57.778 17:11:54 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:57.778 17:11:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:57.778 17:11:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.778 17:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:57.778 ************************************ 00:08:57.778 START TEST filesystem_btrfs 00:08:57.778 ************************************ 00:08:57.778 17:11:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:57.778 17:11:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:57.778 17:11:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:57.778 17:11:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:57.778 17:11:54 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:57.778 17:11:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:57.778 17:11:54 -- common/autotest_common.sh@914 -- # local 
i=0 00:08:57.778 17:11:54 -- common/autotest_common.sh@915 -- # local force 00:08:57.778 17:11:54 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:57.778 17:11:54 -- common/autotest_common.sh@920 -- # force=-f 00:08:57.778 17:11:54 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:58.038 btrfs-progs v6.8.1 00:08:58.038 See https://btrfs.readthedocs.io for more information. 00:08:58.038 00:08:58.038 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:58.038 NOTE: several default settings have changed in version 5.15, please make sure 00:08:58.038 this does not affect your deployments: 00:08:58.038 - DUP for metadata (-m dup) 00:08:58.038 - enabled no-holes (-O no-holes) 00:08:58.038 - enabled free-space-tree (-R free-space-tree) 00:08:58.038 00:08:58.038 Label: (null) 00:08:58.038 UUID: 8995f9b9-7e13-4217-9e26-db5fd1c0682b 00:08:58.038 Node size: 16384 00:08:58.038 Sector size: 4096 (CPU page size: 4096) 00:08:58.038 Filesystem size: 510.00MiB 00:08:58.038 Block group profiles: 00:08:58.038 Data: single 8.00MiB 00:08:58.038 Metadata: DUP 32.00MiB 00:08:58.038 System: DUP 8.00MiB 00:08:58.038 SSD detected: yes 00:08:58.038 Zoned device: no 00:08:58.038 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:58.038 Checksum: crc32c 00:08:58.038 Number of devices: 1 00:08:58.038 Devices: 00:08:58.038 ID SIZE PATH 00:08:58.038 1 510.00MiB /dev/nvme0n1p1 00:08:58.038 00:08:58.038 17:11:54 -- common/autotest_common.sh@931 -- # return 0 00:08:58.038 17:11:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.038 17:11:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.038 17:11:54 -- target/filesystem.sh@25 -- # sync 00:08:58.038 17:11:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.038 17:11:54 -- target/filesystem.sh@27 -- # sync 00:08:58.038 17:11:54 -- target/filesystem.sh@29 -- # i=0 00:08:58.038 17:11:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.038 17:11:54 -- target/filesystem.sh@37 -- # kill -0 1217297 00:08:58.038 17:11:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.038 17:11:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.038 17:11:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.038 17:11:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.038 00:08:58.038 real 0m0.249s 00:08:58.038 user 0m0.024s 00:08:58.038 sys 0m0.130s 00:08:58.038 17:11:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.038 17:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:58.038 ************************************ 00:08:58.038 END TEST filesystem_btrfs 00:08:58.038 ************************************ 00:08:58.038 17:11:54 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:58.038 17:11:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:58.038 17:11:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.038 17:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:58.038 ************************************ 00:08:58.038 START TEST filesystem_xfs 00:08:58.038 ************************************ 00:08:58.038 17:11:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:58.038 17:11:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:58.038 17:11:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:58.038 17:11:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:58.038 17:11:54 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:08:58.038 17:11:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:58.038 17:11:54 -- common/autotest_common.sh@914 -- # local i=0 00:08:58.038 17:11:54 -- common/autotest_common.sh@915 -- # local force 00:08:58.038 17:11:54 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:58.038 17:11:54 -- common/autotest_common.sh@920 -- # force=-f 00:08:58.038 17:11:54 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:58.298 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:58.298 = sectsz=512 attr=2, projid32bit=1 00:08:58.298 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:58.298 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:58.298 data = bsize=4096 blocks=130560, imaxpct=25 00:08:58.298 = sunit=0 swidth=0 blks 00:08:58.298 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:58.298 log =internal log bsize=4096 blocks=16384, version=2 00:08:58.298 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:58.298 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:58.298 Discarding blocks...Done. 00:08:58.298 17:11:54 -- common/autotest_common.sh@931 -- # return 0 00:08:58.298 17:11:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.298 17:11:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.298 17:11:54 -- target/filesystem.sh@25 -- # sync 00:08:58.298 17:11:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.298 17:11:54 -- target/filesystem.sh@27 -- # sync 00:08:58.298 17:11:54 -- target/filesystem.sh@29 -- # i=0 00:08:58.298 17:11:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.298 17:11:54 -- target/filesystem.sh@37 -- # kill -0 1217297 00:08:58.298 17:11:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.298 17:11:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.298 17:11:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.298 17:11:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.298 00:08:58.298 real 0m0.206s 00:08:58.298 user 0m0.032s 00:08:58.298 sys 0m0.075s 00:08:58.298 17:11:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.298 17:11:54 -- common/autotest_common.sh@10 -- # set +x 00:08:58.298 ************************************ 00:08:58.298 END TEST filesystem_xfs 00:08:58.298 ************************************ 00:08:58.298 17:11:54 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:58.298 17:11:54 -- target/filesystem.sh@93 -- # sync 00:08:58.298 17:11:54 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.677 17:11:55 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.677 17:11:55 -- common/autotest_common.sh@1208 -- # local i=0 00:08:59.677 17:11:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:59.677 17:11:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.677 17:11:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:59.677 17:11:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.677 17:11:55 -- common/autotest_common.sh@1220 -- # return 0 00:08:59.677 17:11:55 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.677 17:11:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.677 17:11:55 -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.677 17:11:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.677 17:11:55 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:59.677 17:11:55 -- target/filesystem.sh@101 -- # killprocess 1217297 00:08:59.677 17:11:55 -- common/autotest_common.sh@936 -- # '[' -z 1217297 ']' 00:08:59.677 17:11:55 -- common/autotest_common.sh@940 -- # kill -0 1217297 00:08:59.677 17:11:55 -- common/autotest_common.sh@941 -- # uname 00:08:59.677 17:11:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:59.677 17:11:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1217297 00:08:59.677 17:11:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:59.677 17:11:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:59.677 17:11:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1217297' 00:08:59.677 killing process with pid 1217297 00:08:59.677 17:11:56 -- common/autotest_common.sh@955 -- # kill 1217297 00:08:59.677 17:11:56 -- common/autotest_common.sh@960 -- # wait 1217297 00:08:59.936 17:11:56 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:59.936 00:08:59.937 real 0m7.903s 00:08:59.937 user 0m30.909s 00:08:59.937 sys 0m1.196s 00:08:59.937 17:11:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.937 17:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:59.937 ************************************ 00:08:59.937 END TEST nvmf_filesystem_no_in_capsule 00:08:59.937 ************************************ 00:08:59.937 17:11:56 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:59.937 17:11:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:59.937 17:11:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.937 17:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:59.937 ************************************ 00:08:59.937 START TEST nvmf_filesystem_in_capsule 00:08:59.937 ************************************ 00:08:59.937 17:11:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:59.937 17:11:56 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:59.937 17:11:56 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:59.937 17:11:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:59.937 17:11:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.937 17:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:59.937 17:11:56 -- nvmf/common.sh@469 -- # nvmfpid=1218927 00:08:59.937 17:11:56 -- nvmf/common.sh@470 -- # waitforlisten 1218927 00:08:59.937 17:11:56 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.937 17:11:56 -- common/autotest_common.sh@829 -- # '[' -z 1218927 ']' 00:08:59.937 17:11:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.937 17:11:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.937 17:11:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
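A minimal sketch of what this nvmfappstart/waitforlisten step boils down to, assuming the default RPC socket path /var/tmp/spdk.sock and an SPDK tree built under ./build/bin (same flags as the command logged above):

    # start the target, then poll for its RPC Unix socket before issuing any RPCs
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    echo "nvmf_tgt ($tgt_pid) is ready for RPCs"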
00:08:59.937 17:11:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.937 17:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:59.937 [2024-12-14 17:11:56.547801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:59.937 [2024-12-14 17:11:56.547862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.937 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.937 [2024-12-14 17:11:56.618934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.196 [2024-12-14 17:11:56.656868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.196 [2024-12-14 17:11:56.656980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.196 [2024-12-14 17:11:56.656990] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.196 [2024-12-14 17:11:56.656998] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.196 [2024-12-14 17:11:56.657046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.196 [2024-12-14 17:11:56.657160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.197 [2024-12-14 17:11:56.657233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.197 [2024-12-14 17:11:56.657235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.765 17:11:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.765 17:11:57 -- common/autotest_common.sh@862 -- # return 0 00:09:00.765 17:11:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:00.765 17:11:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.765 17:11:57 -- common/autotest_common.sh@10 -- # set +x 00:09:00.765 17:11:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.765 17:11:57 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:00.765 17:11:57 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:00.765 17:11:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.765 17:11:57 -- common/autotest_common.sh@10 -- # set +x 00:09:00.765 [2024-12-14 17:11:57.432791] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17bef30/0x17c3400) succeed. 00:09:00.765 [2024-12-14 17:11:57.442024] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17c04d0/0x1804aa0) succeed. 
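The rpc_cmd calls that follow configure the target end to end: an RDMA transport with 4096-byte in-capsule data, a 512 MiB malloc bdev, a subsystem with serial SPDKISFASTANDAWESOME, its namespace, and an RDMA listener on 192.168.100.8:4420. A sketch of the same sequence driven through scripts/rpc.py (an assumption here; the harness goes through its rpc_cmd wrapper instead):

    # transport with in-capsule data, backing bdev, subsystem, namespace, listener
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420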
00:09:01.066 17:11:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.066 17:11:57 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:01.066 17:11:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.066 17:11:57 -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 Malloc1 00:09:01.066 17:11:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.066 17:11:57 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.066 17:11:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.066 17:11:57 -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 17:11:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.066 17:11:57 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:01.066 17:11:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.066 17:11:57 -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 17:11:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.066 17:11:57 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:01.066 17:11:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.066 17:11:57 -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 [2024-12-14 17:11:57.701466] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:01.066 17:11:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.066 17:11:57 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:01.066 17:11:57 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:09:01.066 17:11:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:09:01.066 17:11:57 -- common/autotest_common.sh@1369 -- # local bs 00:09:01.066 17:11:57 -- common/autotest_common.sh@1370 -- # local nb 00:09:01.066 17:11:57 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:01.066 17:11:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.406 17:11:57 -- common/autotest_common.sh@10 -- # set +x 00:09:01.406 17:11:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.406 17:11:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:09:01.406 { 00:09:01.406 "name": "Malloc1", 00:09:01.406 "aliases": [ 00:09:01.406 "0a263322-ffc3-4a6c-b0f7-663dacbbc541" 00:09:01.406 ], 00:09:01.406 "product_name": "Malloc disk", 00:09:01.406 "block_size": 512, 00:09:01.406 "num_blocks": 1048576, 00:09:01.406 "uuid": "0a263322-ffc3-4a6c-b0f7-663dacbbc541", 00:09:01.406 "assigned_rate_limits": { 00:09:01.406 "rw_ios_per_sec": 0, 00:09:01.406 "rw_mbytes_per_sec": 0, 00:09:01.406 "r_mbytes_per_sec": 0, 00:09:01.406 "w_mbytes_per_sec": 0 00:09:01.406 }, 00:09:01.406 "claimed": true, 00:09:01.406 "claim_type": "exclusive_write", 00:09:01.406 "zoned": false, 00:09:01.406 "supported_io_types": { 00:09:01.406 "read": true, 00:09:01.406 "write": true, 00:09:01.406 "unmap": true, 00:09:01.406 "write_zeroes": true, 00:09:01.406 "flush": true, 00:09:01.406 "reset": true, 00:09:01.406 "compare": false, 00:09:01.406 "compare_and_write": false, 00:09:01.406 "abort": true, 00:09:01.406 "nvme_admin": false, 00:09:01.406 "nvme_io": false 00:09:01.406 }, 00:09:01.406 "memory_domains": [ 00:09:01.406 { 00:09:01.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.406 "dma_device_type": 2 00:09:01.406 } 00:09:01.406 ], 00:09:01.406 
"driver_specific": {} 00:09:01.406 } 00:09:01.406 ]' 00:09:01.406 17:11:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:09:01.406 17:11:57 -- common/autotest_common.sh@1372 -- # bs=512 00:09:01.406 17:11:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:09:01.406 17:11:57 -- common/autotest_common.sh@1373 -- # nb=1048576 00:09:01.406 17:11:57 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:09:01.406 17:11:57 -- common/autotest_common.sh@1377 -- # echo 512 00:09:01.406 17:11:57 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:01.406 17:11:57 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:02.341 17:11:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.341 17:11:58 -- common/autotest_common.sh@1187 -- # local i=0 00:09:02.341 17:11:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.341 17:11:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:02.341 17:11:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:04.242 17:12:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:04.242 17:12:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:04.242 17:12:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.242 17:12:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:04.242 17:12:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.242 17:12:00 -- common/autotest_common.sh@1197 -- # return 0 00:09:04.242 17:12:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:04.242 17:12:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:04.242 17:12:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:04.242 17:12:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:04.242 17:12:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:04.242 17:12:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:04.242 17:12:00 -- setup/common.sh@80 -- # echo 536870912 00:09:04.242 17:12:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:04.242 17:12:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:04.242 17:12:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:04.242 17:12:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:04.242 17:12:00 -- target/filesystem.sh@69 -- # partprobe 00:09:04.500 17:12:01 -- target/filesystem.sh@70 -- # sleep 1 00:09:05.434 17:12:02 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:05.434 17:12:02 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:05.434 17:12:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:05.434 17:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.434 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:09:05.434 ************************************ 00:09:05.434 START TEST filesystem_in_capsule_ext4 00:09:05.434 ************************************ 00:09:05.434 17:12:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:05.434 17:12:02 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:05.434 17:12:02 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:09:05.434 17:12:02 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:05.434 17:12:02 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:09:05.434 17:12:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:05.434 17:12:02 -- common/autotest_common.sh@914 -- # local i=0 00:09:05.434 17:12:02 -- common/autotest_common.sh@915 -- # local force 00:09:05.434 17:12:02 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:09:05.434 17:12:02 -- common/autotest_common.sh@918 -- # force=-F 00:09:05.434 17:12:02 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:05.434 mke2fs 1.47.0 (5-Feb-2023) 00:09:05.434 Discarding device blocks: 0/522240 done 00:09:05.434 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:05.434 Filesystem UUID: 7f55a9a6-cca6-4eb8-8f4b-5cd0d8627aee 00:09:05.434 Superblock backups stored on blocks: 00:09:05.434 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:05.434 00:09:05.434 Allocating group tables: 0/64 done 00:09:05.434 Writing inode tables: 0/64 done 00:09:05.434 Creating journal (8192 blocks): done 00:09:05.434 Writing superblocks and filesystem accounting information: 0/64 done 00:09:05.434 00:09:05.434 17:12:02 -- common/autotest_common.sh@931 -- # return 0 00:09:05.434 17:12:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:05.693 17:12:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:05.693 17:12:02 -- target/filesystem.sh@25 -- # sync 00:09:05.693 17:12:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:05.693 17:12:02 -- target/filesystem.sh@27 -- # sync 00:09:05.693 17:12:02 -- target/filesystem.sh@29 -- # i=0 00:09:05.693 17:12:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:05.693 17:12:02 -- target/filesystem.sh@37 -- # kill -0 1218927 00:09:05.693 17:12:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:05.693 17:12:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:05.693 17:12:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:05.693 17:12:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:05.693 00:09:05.693 real 0m0.200s 00:09:05.693 user 0m0.034s 00:09:05.693 sys 0m0.069s 00:09:05.693 17:12:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.693 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:09:05.693 ************************************ 00:09:05.693 END TEST filesystem_in_capsule_ext4 00:09:05.693 ************************************ 00:09:05.693 17:12:02 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:05.693 17:12:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:05.693 17:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.693 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:09:05.693 ************************************ 00:09:05.693 START TEST filesystem_in_capsule_btrfs 00:09:05.693 ************************************ 00:09:05.693 17:12:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:05.693 17:12:02 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:05.693 17:12:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:05.693 17:12:02 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:05.693 17:12:02 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:09:05.693 17:12:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
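Host side, each of these filesystem_in_capsule_* passes repeats the cycle the log records: connect to the subsystem over RDMA, wait for the namespace to surface, partition it, build the filesystem under test, and exercise a create/sync/remove round trip before unmounting. A condensed sketch using the values from this run:

    # connect, wait for the SPDK namespace, partition and exercise it
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
    mkdir -p /mnt/device
    mkfs.btrfs -f /dev/nvme0n1p1        # the ext4/xfs passes use mkfs.ext4 -F / mkfs.xfs -f instead
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
    umount /mnt/device
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1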
00:09:05.693 17:12:02 -- common/autotest_common.sh@914 -- # local i=0 00:09:05.693 17:12:02 -- common/autotest_common.sh@915 -- # local force 00:09:05.693 17:12:02 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:09:05.693 17:12:02 -- common/autotest_common.sh@920 -- # force=-f 00:09:05.693 17:12:02 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:05.693 btrfs-progs v6.8.1 00:09:05.693 See https://btrfs.readthedocs.io for more information. 00:09:05.693 00:09:05.693 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:05.693 NOTE: several default settings have changed in version 5.15, please make sure 00:09:05.693 this does not affect your deployments: 00:09:05.693 - DUP for metadata (-m dup) 00:09:05.693 - enabled no-holes (-O no-holes) 00:09:05.693 - enabled free-space-tree (-R free-space-tree) 00:09:05.693 00:09:05.693 Label: (null) 00:09:05.693 UUID: 786bd23b-3f6d-4419-a4d2-6a34c9606e35 00:09:05.693 Node size: 16384 00:09:05.693 Sector size: 4096 (CPU page size: 4096) 00:09:05.693 Filesystem size: 510.00MiB 00:09:05.693 Block group profiles: 00:09:05.693 Data: single 8.00MiB 00:09:05.693 Metadata: DUP 32.00MiB 00:09:05.693 System: DUP 8.00MiB 00:09:05.693 SSD detected: yes 00:09:05.693 Zoned device: no 00:09:05.693 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:05.693 Checksum: crc32c 00:09:05.693 Number of devices: 1 00:09:05.693 Devices: 00:09:05.693 ID SIZE PATH 00:09:05.693 1 510.00MiB /dev/nvme0n1p1 00:09:05.693 00:09:05.693 17:12:02 -- common/autotest_common.sh@931 -- # return 0 00:09:05.951 17:12:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:05.951 17:12:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:05.951 17:12:02 -- target/filesystem.sh@25 -- # sync 00:09:05.951 17:12:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:05.951 17:12:02 -- target/filesystem.sh@27 -- # sync 00:09:05.951 17:12:02 -- target/filesystem.sh@29 -- # i=0 00:09:05.951 17:12:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:05.951 17:12:02 -- target/filesystem.sh@37 -- # kill -0 1218927 00:09:05.951 17:12:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:05.951 17:12:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:05.951 17:12:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:05.951 17:12:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:05.951 00:09:05.951 real 0m0.255s 00:09:05.951 user 0m0.031s 00:09:05.951 sys 0m0.131s 00:09:05.951 17:12:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.951 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:09:05.951 ************************************ 00:09:05.951 END TEST filesystem_in_capsule_btrfs 00:09:05.951 ************************************ 00:09:05.951 17:12:02 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:05.951 17:12:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:05.951 17:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.951 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:09:05.951 ************************************ 00:09:05.951 START TEST filesystem_in_capsule_xfs 00:09:05.951 ************************************ 00:09:05.951 17:12:02 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:09:05.951 17:12:02 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:05.951 17:12:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:05.952 
17:12:02 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:05.952 17:12:02 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:09:05.952 17:12:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:05.952 17:12:02 -- common/autotest_common.sh@914 -- # local i=0 00:09:05.952 17:12:02 -- common/autotest_common.sh@915 -- # local force 00:09:05.952 17:12:02 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:09:05.952 17:12:02 -- common/autotest_common.sh@920 -- # force=-f 00:09:05.952 17:12:02 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:06.210 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:06.210 = sectsz=512 attr=2, projid32bit=1 00:09:06.210 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:06.210 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:06.210 data = bsize=4096 blocks=130560, imaxpct=25 00:09:06.210 = sunit=0 swidth=0 blks 00:09:06.210 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:06.210 log =internal log bsize=4096 blocks=16384, version=2 00:09:06.210 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:06.210 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:06.210 Discarding blocks...Done. 00:09:06.210 17:12:02 -- common/autotest_common.sh@931 -- # return 0 00:09:06.210 17:12:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:06.210 17:12:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:06.210 17:12:02 -- target/filesystem.sh@25 -- # sync 00:09:06.210 17:12:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:06.210 17:12:02 -- target/filesystem.sh@27 -- # sync 00:09:06.210 17:12:02 -- target/filesystem.sh@29 -- # i=0 00:09:06.210 17:12:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:06.210 17:12:02 -- target/filesystem.sh@37 -- # kill -0 1218927 00:09:06.210 17:12:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:06.210 17:12:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:06.210 17:12:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:06.210 17:12:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:06.210 00:09:06.210 real 0m0.208s 00:09:06.210 user 0m0.026s 00:09:06.210 sys 0m0.081s 00:09:06.210 17:12:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:06.210 17:12:02 -- common/autotest_common.sh@10 -- # set +x 00:09:06.210 ************************************ 00:09:06.210 END TEST filesystem_in_capsule_xfs 00:09:06.210 ************************************ 00:09:06.210 17:12:02 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:06.210 17:12:02 -- target/filesystem.sh@93 -- # sync 00:09:06.210 17:12:02 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.143 17:12:03 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.143 17:12:03 -- common/autotest_common.sh@1208 -- # local i=0 00:09:07.401 17:12:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.401 17:12:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:07.401 17:12:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:07.401 17:12:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.401 17:12:03 -- common/autotest_common.sh@1220 -- # return 0 00:09:07.401 17:12:03 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:09:07.401 17:12:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.401 17:12:03 -- common/autotest_common.sh@10 -- # set +x 00:09:07.401 17:12:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.401 17:12:03 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:07.401 17:12:03 -- target/filesystem.sh@101 -- # killprocess 1218927 00:09:07.401 17:12:03 -- common/autotest_common.sh@936 -- # '[' -z 1218927 ']' 00:09:07.401 17:12:03 -- common/autotest_common.sh@940 -- # kill -0 1218927 00:09:07.401 17:12:03 -- common/autotest_common.sh@941 -- # uname 00:09:07.401 17:12:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.401 17:12:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1218927 00:09:07.401 17:12:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.401 17:12:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.401 17:12:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1218927' 00:09:07.401 killing process with pid 1218927 00:09:07.401 17:12:03 -- common/autotest_common.sh@955 -- # kill 1218927 00:09:07.401 17:12:03 -- common/autotest_common.sh@960 -- # wait 1218927 00:09:07.969 17:12:04 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:07.969 00:09:07.969 real 0m7.857s 00:09:07.969 user 0m30.685s 00:09:07.969 sys 0m1.203s 00:09:07.969 17:12:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.969 17:12:04 -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 ************************************ 00:09:07.969 END TEST nvmf_filesystem_in_capsule 00:09:07.969 ************************************ 00:09:07.969 17:12:04 -- target/filesystem.sh@108 -- # nvmftestfini 00:09:07.969 17:12:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:07.969 17:12:04 -- nvmf/common.sh@116 -- # sync 00:09:07.969 17:12:04 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:07.969 17:12:04 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:07.969 17:12:04 -- nvmf/common.sh@119 -- # set +e 00:09:07.969 17:12:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:07.969 17:12:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:07.969 rmmod nvme_rdma 00:09:07.969 rmmod nvme_fabrics 00:09:07.969 17:12:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:07.969 17:12:04 -- nvmf/common.sh@123 -- # set -e 00:09:07.969 17:12:04 -- nvmf/common.sh@124 -- # return 0 00:09:07.969 17:12:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:09:07.969 17:12:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:07.969 17:12:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:07.969 00:09:07.969 real 0m22.914s 00:09:07.969 user 1m3.718s 00:09:07.969 sys 0m7.587s 00:09:07.969 17:12:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.969 17:12:04 -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 ************************************ 00:09:07.969 END TEST nvmf_filesystem 00:09:07.969 ************************************ 00:09:07.969 17:12:04 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:07.969 17:12:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:07.969 17:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.969 17:12:04 -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 ************************************ 00:09:07.969 START TEST nvmf_discovery 00:09:07.969 
************************************ 00:09:07.969 17:12:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:07.969 * Looking for test storage... 00:09:07.969 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:07.969 17:12:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:07.969 17:12:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:07.969 17:12:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.228 17:12:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.228 17:12:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.228 17:12:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.228 17:12:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.228 17:12:04 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.228 17:12:04 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.228 17:12:04 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.228 17:12:04 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.228 17:12:04 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.228 17:12:04 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.228 17:12:04 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.228 17:12:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.228 17:12:04 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.228 17:12:04 -- scripts/common.sh@344 -- # : 1 00:09:08.228 17:12:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.228 17:12:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.228 17:12:04 -- scripts/common.sh@364 -- # decimal 1 00:09:08.228 17:12:04 -- scripts/common.sh@352 -- # local d=1 00:09:08.228 17:12:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.228 17:12:04 -- scripts/common.sh@354 -- # echo 1 00:09:08.228 17:12:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.228 17:12:04 -- scripts/common.sh@365 -- # decimal 2 00:09:08.228 17:12:04 -- scripts/common.sh@352 -- # local d=2 00:09:08.228 17:12:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.228 17:12:04 -- scripts/common.sh@354 -- # echo 2 00:09:08.228 17:12:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.228 17:12:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.228 17:12:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.228 17:12:04 -- scripts/common.sh@367 -- # return 0 00:09:08.228 17:12:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.228 17:12:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.228 --rc genhtml_branch_coverage=1 00:09:08.228 --rc genhtml_function_coverage=1 00:09:08.228 --rc genhtml_legend=1 00:09:08.228 --rc geninfo_all_blocks=1 00:09:08.228 --rc geninfo_unexecuted_blocks=1 00:09:08.228 00:09:08.228 ' 00:09:08.228 17:12:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.228 --rc genhtml_branch_coverage=1 00:09:08.228 --rc genhtml_function_coverage=1 00:09:08.228 --rc genhtml_legend=1 00:09:08.228 --rc geninfo_all_blocks=1 00:09:08.228 --rc geninfo_unexecuted_blocks=1 00:09:08.228 00:09:08.228 ' 00:09:08.228 17:12:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:08.228 --rc genhtml_branch_coverage=1 00:09:08.228 --rc genhtml_function_coverage=1 00:09:08.228 --rc genhtml_legend=1 00:09:08.228 --rc geninfo_all_blocks=1 00:09:08.228 --rc geninfo_unexecuted_blocks=1 00:09:08.228 00:09:08.228 ' 00:09:08.228 17:12:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.228 --rc genhtml_branch_coverage=1 00:09:08.228 --rc genhtml_function_coverage=1 00:09:08.228 --rc genhtml_legend=1 00:09:08.228 --rc geninfo_all_blocks=1 00:09:08.228 --rc geninfo_unexecuted_blocks=1 00:09:08.228 00:09:08.228 ' 00:09:08.228 17:12:04 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.228 17:12:04 -- nvmf/common.sh@7 -- # uname -s 00:09:08.228 17:12:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.228 17:12:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.228 17:12:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.228 17:12:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.228 17:12:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.228 17:12:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.228 17:12:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.228 17:12:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.228 17:12:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.228 17:12:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.228 17:12:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:08.228 17:12:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:08.228 17:12:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.228 17:12:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.228 17:12:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.229 17:12:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:08.229 17:12:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.229 17:12:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.229 17:12:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.229 17:12:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.229 17:12:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.229 17:12:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.229 17:12:04 -- paths/export.sh@5 -- # export PATH 00:09:08.229 17:12:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.229 17:12:04 -- nvmf/common.sh@46 -- # : 0 00:09:08.229 17:12:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:08.229 17:12:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:08.229 17:12:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:08.229 17:12:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.229 17:12:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.229 17:12:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:08.229 17:12:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:08.229 17:12:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:08.229 17:12:04 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:08.229 17:12:04 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:08.229 17:12:04 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:08.229 17:12:04 -- target/discovery.sh@15 -- # hash nvme 00:09:08.229 17:12:04 -- target/discovery.sh@20 -- # nvmftestinit 00:09:08.229 17:12:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:08.229 17:12:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.229 17:12:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:08.229 17:12:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:08.229 17:12:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:08.229 17:12:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.229 17:12:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.229 17:12:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.229 17:12:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:08.229 17:12:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:08.229 17:12:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:08.229 17:12:04 -- common/autotest_common.sh@10 -- # set +x 00:09:16.343 17:12:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:16.343 17:12:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:16.343 17:12:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:16.343 17:12:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:16.343 17:12:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:16.343 17:12:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:16.343 17:12:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:16.343 17:12:11 -- 
nvmf/common.sh@294 -- # net_devs=() 00:09:16.343 17:12:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:16.343 17:12:11 -- nvmf/common.sh@295 -- # e810=() 00:09:16.343 17:12:11 -- nvmf/common.sh@295 -- # local -ga e810 00:09:16.343 17:12:11 -- nvmf/common.sh@296 -- # x722=() 00:09:16.343 17:12:11 -- nvmf/common.sh@296 -- # local -ga x722 00:09:16.343 17:12:11 -- nvmf/common.sh@297 -- # mlx=() 00:09:16.343 17:12:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:16.343 17:12:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.343 17:12:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.344 17:12:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.344 17:12:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:16.344 17:12:11 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:16.344 17:12:11 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:16.344 17:12:11 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:16.344 17:12:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:16.344 17:12:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:16.344 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:16.344 17:12:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.344 17:12:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:16.344 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:16.344 17:12:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.344 17:12:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:16.344 17:12:11 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:16.344 
17:12:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.344 17:12:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:16.344 17:12:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.344 17:12:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:16.344 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.344 17:12:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.344 17:12:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:16.344 17:12:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.344 17:12:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:16.344 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.344 17:12:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:16.344 17:12:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:16.344 17:12:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:16.344 17:12:11 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:16.344 17:12:11 -- nvmf/common.sh@57 -- # uname 00:09:16.344 17:12:11 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:16.344 17:12:11 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:16.344 17:12:11 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:16.344 17:12:11 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:16.344 17:12:11 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:16.344 17:12:11 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:16.344 17:12:11 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:16.344 17:12:11 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:16.344 17:12:11 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:16.344 17:12:11 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:16.344 17:12:11 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:16.344 17:12:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.344 17:12:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:16.344 17:12:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:16.344 17:12:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.344 17:12:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:16.344 17:12:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@104 -- # continue 2 00:09:16.344 17:12:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@104 -- # continue 2 00:09:16.344 17:12:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:16.344 17:12:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.344 17:12:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:16.344 17:12:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:16.344 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.344 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:16.344 altname enp217s0f0np0 00:09:16.344 altname ens818f0np0 00:09:16.344 inet 192.168.100.8/24 scope global mlx_0_0 00:09:16.344 valid_lft forever preferred_lft forever 00:09:16.344 17:12:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:16.344 17:12:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.344 17:12:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:16.344 17:12:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:16.344 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.344 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:16.344 altname enp217s0f1np1 00:09:16.344 altname ens818f1np1 00:09:16.344 inet 192.168.100.9/24 scope global mlx_0_1 00:09:16.344 valid_lft forever preferred_lft forever 00:09:16.344 17:12:11 -- nvmf/common.sh@410 -- # return 0 00:09:16.344 17:12:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:16.344 17:12:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:16.344 17:12:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:16.344 17:12:11 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:16.344 17:12:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.344 17:12:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:16.344 17:12:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:16.344 17:12:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.344 17:12:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:16.344 17:12:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@104 -- # continue 2 00:09:16.344 17:12:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.344 17:12:11 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.344 17:12:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@104 -- # continue 2 00:09:16.344 17:12:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:16.344 17:12:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.344 17:12:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:16.344 17:12:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:16.344 17:12:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:16.344 17:12:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:16.344 192.168.100.9' 00:09:16.344 17:12:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:16.344 192.168.100.9' 00:09:16.344 17:12:11 -- nvmf/common.sh@445 -- # head -n 1 00:09:16.344 17:12:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:16.344 17:12:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:16.344 192.168.100.9' 00:09:16.344 17:12:11 -- nvmf/common.sh@446 -- # tail -n +2 00:09:16.344 17:12:11 -- nvmf/common.sh@446 -- # head -n 1 00:09:16.344 17:12:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:16.344 17:12:11 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:16.344 17:12:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:16.344 17:12:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:16.344 17:12:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:16.344 17:12:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:16.344 17:12:11 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:16.344 17:12:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:16.344 17:12:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:16.344 17:12:11 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:11 -- nvmf/common.sh@469 -- # nvmfpid=1224403 00:09:16.345 17:12:11 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.345 17:12:11 -- nvmf/common.sh@470 -- # waitforlisten 1224403 00:09:16.345 17:12:11 -- common/autotest_common.sh@829 -- # '[' -z 1224403 ']' 00:09:16.345 17:12:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.345 17:12:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.345 17:12:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.345 17:12:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.345 17:12:11 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 [2024-12-14 17:12:11.836079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:16.345 [2024-12-14 17:12:11.836133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.345 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.345 [2024-12-14 17:12:11.905197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.345 [2024-12-14 17:12:11.944043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.345 [2024-12-14 17:12:11.944175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.345 [2024-12-14 17:12:11.944186] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.345 [2024-12-14 17:12:11.944196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.345 [2024-12-14 17:12:11.944249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.345 [2024-12-14 17:12:11.944270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.345 [2024-12-14 17:12:11.944357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.345 [2024-12-14 17:12:11.944356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.345 17:12:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.345 17:12:12 -- common/autotest_common.sh@862 -- # return 0 00:09:16.345 17:12:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:16.345 17:12:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.345 17:12:12 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 [2024-12-14 17:12:12.723451] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c1c0d0/0x1c205a0) succeed. 00:09:16.345 [2024-12-14 17:12:12.732619] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c1d670/0x1c61c40) succeed. 
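From here the discovery test publishes four null-backed subsystems plus a discovery referral and then queries them from the host. A sketch of the equivalent setup and query (scripts/rpc.py is an assumption; the harness drives these RPCs through its rpc_cmd wrapper):

    # four null bdevs/subsystems, a discovery listener, a referral, then a host-side query
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e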
00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@26 -- # seq 1 4 00:09:16.345 17:12:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:16.345 17:12:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 Null1 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 [2024-12-14 17:12:12.898292] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:16.345 17:12:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 Null2 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:16.345 17:12:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 Null3 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:16.345 17:12:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 Null4 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:16.345 17:12:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:12 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:13 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:16.345 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:13 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:16.345 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.345 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.345 17:12:13 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:09:16.604 00:09:16.604 Discovery Log Number of Records 6, Generation counter 6 00:09:16.604 =====Discovery Log Entry 0====== 00:09:16.604 trtype: 
rdma 00:09:16.604 adrfam: ipv4 00:09:16.604 subtype: current discovery subsystem 00:09:16.604 treq: not required 00:09:16.604 portid: 0 00:09:16.604 trsvcid: 4420 00:09:16.604 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:16.604 traddr: 192.168.100.8 00:09:16.604 eflags: explicit discovery connections, duplicate discovery information 00:09:16.604 rdma_prtype: not specified 00:09:16.604 rdma_qptype: connected 00:09:16.604 rdma_cms: rdma-cm 00:09:16.604 rdma_pkey: 0x0000 00:09:16.604 =====Discovery Log Entry 1====== 00:09:16.604 trtype: rdma 00:09:16.604 adrfam: ipv4 00:09:16.604 subtype: nvme subsystem 00:09:16.604 treq: not required 00:09:16.604 portid: 0 00:09:16.604 trsvcid: 4420 00:09:16.604 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:16.604 traddr: 192.168.100.8 00:09:16.604 eflags: none 00:09:16.604 rdma_prtype: not specified 00:09:16.604 rdma_qptype: connected 00:09:16.604 rdma_cms: rdma-cm 00:09:16.604 rdma_pkey: 0x0000 00:09:16.604 =====Discovery Log Entry 2====== 00:09:16.604 trtype: rdma 00:09:16.604 adrfam: ipv4 00:09:16.604 subtype: nvme subsystem 00:09:16.604 treq: not required 00:09:16.604 portid: 0 00:09:16.604 trsvcid: 4420 00:09:16.604 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:16.604 traddr: 192.168.100.8 00:09:16.604 eflags: none 00:09:16.604 rdma_prtype: not specified 00:09:16.604 rdma_qptype: connected 00:09:16.604 rdma_cms: rdma-cm 00:09:16.604 rdma_pkey: 0x0000 00:09:16.604 =====Discovery Log Entry 3====== 00:09:16.604 trtype: rdma 00:09:16.604 adrfam: ipv4 00:09:16.604 subtype: nvme subsystem 00:09:16.604 treq: not required 00:09:16.604 portid: 0 00:09:16.604 trsvcid: 4420 00:09:16.604 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:16.604 traddr: 192.168.100.8 00:09:16.604 eflags: none 00:09:16.604 rdma_prtype: not specified 00:09:16.604 rdma_qptype: connected 00:09:16.604 rdma_cms: rdma-cm 00:09:16.604 rdma_pkey: 0x0000 00:09:16.604 =====Discovery Log Entry 4====== 00:09:16.604 trtype: rdma 00:09:16.604 adrfam: ipv4 00:09:16.604 subtype: nvme subsystem 00:09:16.604 treq: not required 00:09:16.604 portid: 0 00:09:16.604 trsvcid: 4420 00:09:16.604 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:16.604 traddr: 192.168.100.8 00:09:16.604 eflags: none 00:09:16.604 rdma_prtype: not specified 00:09:16.604 rdma_qptype: connected 00:09:16.604 rdma_cms: rdma-cm 00:09:16.604 rdma_pkey: 0x0000 00:09:16.604 =====Discovery Log Entry 5====== 00:09:16.604 trtype: rdma 00:09:16.604 adrfam: ipv4 00:09:16.604 subtype: discovery subsystem referral 00:09:16.604 treq: not required 00:09:16.604 portid: 0 00:09:16.604 trsvcid: 4430 00:09:16.604 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:16.604 traddr: 192.168.100.8 00:09:16.604 eflags: none 00:09:16.604 rdma_prtype: unrecognized 00:09:16.604 rdma_qptype: unrecognized 00:09:16.604 rdma_cms: unrecognized 00:09:16.604 rdma_pkey: 0x0000 00:09:16.604 17:12:13 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:16.604 Perform nvmf subsystem discovery via RPC 00:09:16.604 17:12:13 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:16.604 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.604 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.604 [2024-12-14 17:12:13.130819] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:16.604 [ 00:09:16.604 { 00:09:16.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:16.604 "subtype": "Discovery", 
00:09:16.604 "listen_addresses": [ 00:09:16.604 { 00:09:16.604 "transport": "RDMA", 00:09:16.604 "trtype": "RDMA", 00:09:16.604 "adrfam": "IPv4", 00:09:16.605 "traddr": "192.168.100.8", 00:09:16.605 "trsvcid": "4420" 00:09:16.605 } 00:09:16.605 ], 00:09:16.605 "allow_any_host": true, 00:09:16.605 "hosts": [] 00:09:16.605 }, 00:09:16.605 { 00:09:16.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.605 "subtype": "NVMe", 00:09:16.605 "listen_addresses": [ 00:09:16.605 { 00:09:16.605 "transport": "RDMA", 00:09:16.605 "trtype": "RDMA", 00:09:16.605 "adrfam": "IPv4", 00:09:16.605 "traddr": "192.168.100.8", 00:09:16.605 "trsvcid": "4420" 00:09:16.605 } 00:09:16.605 ], 00:09:16.605 "allow_any_host": true, 00:09:16.605 "hosts": [], 00:09:16.605 "serial_number": "SPDK00000000000001", 00:09:16.605 "model_number": "SPDK bdev Controller", 00:09:16.605 "max_namespaces": 32, 00:09:16.605 "min_cntlid": 1, 00:09:16.605 "max_cntlid": 65519, 00:09:16.605 "namespaces": [ 00:09:16.605 { 00:09:16.605 "nsid": 1, 00:09:16.605 "bdev_name": "Null1", 00:09:16.605 "name": "Null1", 00:09:16.605 "nguid": "420DD28A655B4037AFDB9F02789B064D", 00:09:16.605 "uuid": "420dd28a-655b-4037-afdb-9f02789b064d" 00:09:16.605 } 00:09:16.605 ] 00:09:16.605 }, 00:09:16.605 { 00:09:16.605 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:16.605 "subtype": "NVMe", 00:09:16.605 "listen_addresses": [ 00:09:16.605 { 00:09:16.605 "transport": "RDMA", 00:09:16.605 "trtype": "RDMA", 00:09:16.605 "adrfam": "IPv4", 00:09:16.605 "traddr": "192.168.100.8", 00:09:16.605 "trsvcid": "4420" 00:09:16.605 } 00:09:16.605 ], 00:09:16.605 "allow_any_host": true, 00:09:16.605 "hosts": [], 00:09:16.605 "serial_number": "SPDK00000000000002", 00:09:16.605 "model_number": "SPDK bdev Controller", 00:09:16.605 "max_namespaces": 32, 00:09:16.605 "min_cntlid": 1, 00:09:16.605 "max_cntlid": 65519, 00:09:16.605 "namespaces": [ 00:09:16.605 { 00:09:16.605 "nsid": 1, 00:09:16.605 "bdev_name": "Null2", 00:09:16.605 "name": "Null2", 00:09:16.605 "nguid": "663561EA58A343E99B095BE504573AD2", 00:09:16.605 "uuid": "663561ea-58a3-43e9-9b09-5be504573ad2" 00:09:16.605 } 00:09:16.605 ] 00:09:16.605 }, 00:09:16.605 { 00:09:16.605 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:16.605 "subtype": "NVMe", 00:09:16.605 "listen_addresses": [ 00:09:16.605 { 00:09:16.605 "transport": "RDMA", 00:09:16.605 "trtype": "RDMA", 00:09:16.605 "adrfam": "IPv4", 00:09:16.605 "traddr": "192.168.100.8", 00:09:16.605 "trsvcid": "4420" 00:09:16.605 } 00:09:16.605 ], 00:09:16.605 "allow_any_host": true, 00:09:16.605 "hosts": [], 00:09:16.605 "serial_number": "SPDK00000000000003", 00:09:16.605 "model_number": "SPDK bdev Controller", 00:09:16.605 "max_namespaces": 32, 00:09:16.605 "min_cntlid": 1, 00:09:16.605 "max_cntlid": 65519, 00:09:16.605 "namespaces": [ 00:09:16.605 { 00:09:16.605 "nsid": 1, 00:09:16.605 "bdev_name": "Null3", 00:09:16.605 "name": "Null3", 00:09:16.605 "nguid": "15F81C1784434164A0A91F24496BFD69", 00:09:16.605 "uuid": "15f81c17-8443-4164-a0a9-1f24496bfd69" 00:09:16.605 } 00:09:16.605 ] 00:09:16.605 }, 00:09:16.605 { 00:09:16.605 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:16.605 "subtype": "NVMe", 00:09:16.605 "listen_addresses": [ 00:09:16.605 { 00:09:16.605 "transport": "RDMA", 00:09:16.605 "trtype": "RDMA", 00:09:16.605 "adrfam": "IPv4", 00:09:16.605 "traddr": "192.168.100.8", 00:09:16.605 "trsvcid": "4420" 00:09:16.605 } 00:09:16.605 ], 00:09:16.605 "allow_any_host": true, 00:09:16.605 "hosts": [], 00:09:16.605 "serial_number": "SPDK00000000000004", 00:09:16.605 "model_number": "SPDK bdev 
Controller", 00:09:16.605 "max_namespaces": 32, 00:09:16.605 "min_cntlid": 1, 00:09:16.605 "max_cntlid": 65519, 00:09:16.605 "namespaces": [ 00:09:16.605 { 00:09:16.605 "nsid": 1, 00:09:16.605 "bdev_name": "Null4", 00:09:16.605 "name": "Null4", 00:09:16.605 "nguid": "2D686F9991E2400986A8EF5FC28E7462", 00:09:16.605 "uuid": "2d686f99-91e2-4009-86a8-ef5fc28e7462" 00:09:16.605 } 00:09:16.605 ] 00:09:16.605 } 00:09:16.605 ] 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@42 -- # seq 1 4 00:09:16.605 17:12:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:16.605 17:12:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:16.605 17:12:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:16.605 17:12:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:16.605 17:12:13 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 
17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 17:12:13 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:16.605 17:12:13 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:16.605 17:12:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 17:12:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.870 17:12:13 -- target/discovery.sh@49 -- # check_bdevs= 00:09:16.870 17:12:13 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:16.870 17:12:13 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:16.870 17:12:13 -- target/discovery.sh@57 -- # nvmftestfini 00:09:16.870 17:12:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:16.870 17:12:13 -- nvmf/common.sh@116 -- # sync 00:09:16.870 17:12:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:16.870 17:12:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:16.870 17:12:13 -- nvmf/common.sh@119 -- # set +e 00:09:16.870 17:12:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:16.870 17:12:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:16.870 rmmod nvme_rdma 00:09:16.870 rmmod nvme_fabrics 00:09:16.870 17:12:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:16.870 17:12:13 -- nvmf/common.sh@123 -- # set -e 00:09:16.870 17:12:13 -- nvmf/common.sh@124 -- # return 0 00:09:16.870 17:12:13 -- nvmf/common.sh@477 -- # '[' -n 1224403 ']' 00:09:16.870 17:12:13 -- nvmf/common.sh@478 -- # killprocess 1224403 00:09:16.870 17:12:13 -- common/autotest_common.sh@936 -- # '[' -z 1224403 ']' 00:09:16.870 17:12:13 -- common/autotest_common.sh@940 -- # kill -0 1224403 00:09:16.870 17:12:13 -- common/autotest_common.sh@941 -- # uname 00:09:16.870 17:12:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.870 17:12:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1224403 00:09:16.870 17:12:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:16.870 17:12:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:16.870 17:12:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1224403' 00:09:16.870 killing process with pid 1224403 00:09:16.870 17:12:13 -- common/autotest_common.sh@955 -- # kill 1224403 00:09:16.870 [2024-12-14 17:12:13.408295] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:16.870 17:12:13 -- common/autotest_common.sh@960 -- # wait 1224403 00:09:17.129 17:12:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:17.129 17:12:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:17.129 00:09:17.129 real 0m9.162s 00:09:17.129 user 0m8.898s 00:09:17.129 sys 0m5.913s 00:09:17.129 17:12:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.129 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:17.129 ************************************ 00:09:17.129 END TEST nvmf_discovery 00:09:17.129 ************************************ 00:09:17.129 17:12:13 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:17.129 17:12:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:17.129 17:12:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.129 17:12:13 -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.129 ************************************ 00:09:17.129 START TEST nvmf_referrals 00:09:17.129 ************************************ 00:09:17.129 17:12:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:17.129 * Looking for test storage... 00:09:17.129 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:17.129 17:12:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:17.129 17:12:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:17.129 17:12:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:17.388 17:12:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:17.388 17:12:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:17.388 17:12:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:17.388 17:12:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:17.388 17:12:13 -- scripts/common.sh@335 -- # IFS=.-: 00:09:17.388 17:12:13 -- scripts/common.sh@335 -- # read -ra ver1 00:09:17.388 17:12:13 -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.388 17:12:13 -- scripts/common.sh@336 -- # read -ra ver2 00:09:17.388 17:12:13 -- scripts/common.sh@337 -- # local 'op=<' 00:09:17.388 17:12:13 -- scripts/common.sh@339 -- # ver1_l=2 00:09:17.388 17:12:13 -- scripts/common.sh@340 -- # ver2_l=1 00:09:17.388 17:12:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:17.388 17:12:13 -- scripts/common.sh@343 -- # case "$op" in 00:09:17.388 17:12:13 -- scripts/common.sh@344 -- # : 1 00:09:17.388 17:12:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:17.388 17:12:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.388 17:12:13 -- scripts/common.sh@364 -- # decimal 1 00:09:17.388 17:12:13 -- scripts/common.sh@352 -- # local d=1 00:09:17.388 17:12:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.388 17:12:13 -- scripts/common.sh@354 -- # echo 1 00:09:17.388 17:12:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:17.388 17:12:13 -- scripts/common.sh@365 -- # decimal 2 00:09:17.388 17:12:13 -- scripts/common.sh@352 -- # local d=2 00:09:17.388 17:12:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.388 17:12:13 -- scripts/common.sh@354 -- # echo 2 00:09:17.388 17:12:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:17.388 17:12:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:17.388 17:12:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:17.388 17:12:13 -- scripts/common.sh@367 -- # return 0 00:09:17.388 17:12:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.388 17:12:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:17.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.388 --rc genhtml_branch_coverage=1 00:09:17.388 --rc genhtml_function_coverage=1 00:09:17.388 --rc genhtml_legend=1 00:09:17.388 --rc geninfo_all_blocks=1 00:09:17.388 --rc geninfo_unexecuted_blocks=1 00:09:17.388 00:09:17.388 ' 00:09:17.388 17:12:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:17.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.388 --rc genhtml_branch_coverage=1 00:09:17.388 --rc genhtml_function_coverage=1 00:09:17.388 --rc genhtml_legend=1 00:09:17.388 --rc geninfo_all_blocks=1 00:09:17.388 --rc geninfo_unexecuted_blocks=1 00:09:17.388 00:09:17.388 ' 00:09:17.388 
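The nvmf_discovery run traced above reduces to one short RPC sequence per subsystem plus a host-side discovery. A minimal sketch of that sequence, assuming a running nvmf_tgt and that rpc_cmd resolves to scripts/rpc.py on the default RPC socket; the sizes, NQNs, serial numbers and the 192.168.100.8 listener simply mirror the trace, and the --hostnqn/--hostid flags the test passes to nvme discover are omitted here:

  # Sketch of the discovery.sh setup traced above (not the test script itself)
  rpc=scripts/rpc.py
  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512                        # null bdev: 102400 MiB, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
           -a -s SPDK0000000000000$i                                 # -a: allow any host, -s: serial number
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i  # expose the bdev as namespace 1
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
           -t rdma -a 192.168.100.8 -s 4420                          # NVMe/RDMA listener on port 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430  # appears as discovery log entry 5
  nvme discover -t rdma -a 192.168.100.8 -s 4420                     # 6 records, as in the output above

The teardown later in the trace is the mirror image: nvmf_delete_subsystem and bdev_null_delete for each of the four, then nvmf_discovery_remove_referral for port 4430.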
17:12:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:17.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.388 --rc genhtml_branch_coverage=1 00:09:17.388 --rc genhtml_function_coverage=1 00:09:17.388 --rc genhtml_legend=1 00:09:17.388 --rc geninfo_all_blocks=1 00:09:17.388 --rc geninfo_unexecuted_blocks=1 00:09:17.388 00:09:17.388 ' 00:09:17.388 17:12:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:17.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.388 --rc genhtml_branch_coverage=1 00:09:17.388 --rc genhtml_function_coverage=1 00:09:17.388 --rc genhtml_legend=1 00:09:17.388 --rc geninfo_all_blocks=1 00:09:17.388 --rc geninfo_unexecuted_blocks=1 00:09:17.388 00:09:17.388 ' 00:09:17.388 17:12:13 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.388 17:12:13 -- nvmf/common.sh@7 -- # uname -s 00:09:17.388 17:12:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.388 17:12:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.388 17:12:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.388 17:12:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.388 17:12:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.388 17:12:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.388 17:12:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.388 17:12:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.388 17:12:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.388 17:12:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.388 17:12:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:17.388 17:12:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:17.388 17:12:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.388 17:12:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.388 17:12:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.388 17:12:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:17.388 17:12:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.388 17:12:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.388 17:12:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.389 17:12:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.389 17:12:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.389 17:12:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.389 17:12:13 -- paths/export.sh@5 -- # export PATH 00:09:17.389 17:12:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.389 17:12:13 -- nvmf/common.sh@46 -- # : 0 00:09:17.389 17:12:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:17.389 17:12:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:17.389 17:12:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:17.389 17:12:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.389 17:12:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.389 17:12:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:17.389 17:12:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:17.389 17:12:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:17.389 17:12:13 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:17.389 17:12:13 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:17.389 17:12:13 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:17.389 17:12:13 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:17.389 17:12:13 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:17.389 17:12:13 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:17.389 17:12:13 -- target/referrals.sh@37 -- # nvmftestinit 00:09:17.389 17:12:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:17.389 17:12:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.389 17:12:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:17.389 17:12:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:17.389 17:12:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:17.389 17:12:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.389 17:12:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.389 17:12:13 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:17.389 17:12:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:17.389 17:12:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:17.389 17:12:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:17.389 17:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:25.503 17:12:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:25.503 17:12:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:25.503 17:12:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:25.503 17:12:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:25.503 17:12:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:25.503 17:12:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:25.503 17:12:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:25.503 17:12:20 -- nvmf/common.sh@294 -- # net_devs=() 00:09:25.503 17:12:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:25.503 17:12:20 -- nvmf/common.sh@295 -- # e810=() 00:09:25.503 17:12:20 -- nvmf/common.sh@295 -- # local -ga e810 00:09:25.503 17:12:20 -- nvmf/common.sh@296 -- # x722=() 00:09:25.503 17:12:20 -- nvmf/common.sh@296 -- # local -ga x722 00:09:25.503 17:12:20 -- nvmf/common.sh@297 -- # mlx=() 00:09:25.503 17:12:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:25.503 17:12:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.503 17:12:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:25.503 17:12:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:25.503 17:12:20 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:25.503 17:12:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:25.503 17:12:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:25.503 17:12:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:25.503 17:12:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:25.503 17:12:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:25.503 17:12:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:25.503 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:25.503 17:12:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:25.503 17:12:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.504 17:12:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:25.504 17:12:20 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:25.504 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:25.504 17:12:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.504 17:12:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:25.504 17:12:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.504 17:12:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:25.504 17:12:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.504 17:12:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:25.504 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.504 17:12:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.504 17:12:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:25.504 17:12:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.504 17:12:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:25.504 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.504 17:12:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:25.504 17:12:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:25.504 17:12:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:25.504 17:12:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:25.504 17:12:20 -- nvmf/common.sh@57 -- # uname 00:09:25.504 17:12:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:25.504 17:12:20 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:25.504 17:12:20 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:25.504 17:12:20 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:25.504 17:12:20 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:25.504 17:12:20 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:25.504 17:12:20 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:25.504 17:12:20 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:25.504 17:12:20 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:25.504 17:12:20 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:25.504 17:12:20 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:25.504 17:12:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.504 17:12:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:25.504 17:12:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:25.504 17:12:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.504 17:12:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:25.504 17:12:20 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@104 -- # continue 2 00:09:25.504 17:12:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@104 -- # continue 2 00:09:25.504 17:12:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:25.504 17:12:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.504 17:12:20 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:25.504 17:12:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:25.504 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.504 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:25.504 altname enp217s0f0np0 00:09:25.504 altname ens818f0np0 00:09:25.504 inet 192.168.100.8/24 scope global mlx_0_0 00:09:25.504 valid_lft forever preferred_lft forever 00:09:25.504 17:12:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:25.504 17:12:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.504 17:12:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:25.504 17:12:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:25.504 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.504 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:25.504 altname enp217s0f1np1 00:09:25.504 altname ens818f1np1 00:09:25.504 inet 192.168.100.9/24 scope global mlx_0_1 00:09:25.504 valid_lft forever preferred_lft forever 00:09:25.504 17:12:20 -- nvmf/common.sh@410 -- # return 0 00:09:25.504 17:12:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:25.504 17:12:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:25.504 17:12:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:25.504 17:12:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:25.504 17:12:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.504 17:12:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:25.504 17:12:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:25.504 17:12:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.504 17:12:20 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:25.504 17:12:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@104 -- # continue 2 00:09:25.504 17:12:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.504 17:12:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.504 17:12:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@104 -- # continue 2 00:09:25.504 17:12:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:25.504 17:12:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.504 17:12:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:25.504 17:12:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:25.504 17:12:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:25.504 17:12:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:25.504 192.168.100.9' 00:09:25.504 17:12:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:25.504 192.168.100.9' 00:09:25.504 17:12:20 -- nvmf/common.sh@445 -- # head -n 1 00:09:25.504 17:12:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:25.504 17:12:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:25.504 192.168.100.9' 00:09:25.504 17:12:20 -- nvmf/common.sh@446 -- # tail -n +2 00:09:25.504 17:12:20 -- nvmf/common.sh@446 -- # head -n 1 00:09:25.504 17:12:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:25.504 17:12:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:25.504 17:12:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:25.504 17:12:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:25.504 17:12:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:25.504 17:12:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:25.504 17:12:20 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:25.504 17:12:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:25.504 17:12:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.504 17:12:20 -- common/autotest_common.sh@10 -- # set +x 00:09:25.504 17:12:20 -- nvmf/common.sh@469 -- # nvmfpid=1228154 00:09:25.504 17:12:20 -- nvmf/common.sh@470 -- # waitforlisten 1228154 00:09:25.504 17:12:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.504 17:12:20 -- common/autotest_common.sh@829 -- # '[' -z 1228154 ']' 00:09:25.504 17:12:20 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:25.504 17:12:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.504 17:12:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.504 17:12:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.504 17:12:20 -- common/autotest_common.sh@10 -- # set +x 00:09:25.504 [2024-12-14 17:12:21.030103] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:25.504 [2024-12-14 17:12:21.030153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.505 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.505 [2024-12-14 17:12:21.099718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.505 [2024-12-14 17:12:21.137282] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.505 [2024-12-14 17:12:21.137389] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.505 [2024-12-14 17:12:21.137399] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.505 [2024-12-14 17:12:21.137408] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.505 [2024-12-14 17:12:21.137457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.505 [2024-12-14 17:12:21.137570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.505 [2024-12-14 17:12:21.137591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.505 [2024-12-14 17:12:21.137593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.505 17:12:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.505 17:12:21 -- common/autotest_common.sh@862 -- # return 0 00:09:25.505 17:12:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:25.505 17:12:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.505 17:12:21 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 17:12:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.505 17:12:21 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:25.505 17:12:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.505 17:12:21 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 [2024-12-14 17:12:21.913649] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x233a0d0/0x233e5a0) succeed. 00:09:25.505 [2024-12-14 17:12:21.922829] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x233b670/0x237fc40) succeed. 
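At this point the log has brought the target up for the referrals test: nvmf_tgt was launched and waited for with waitforlisten, the RDMA transport was created, and the two mlx5 IB devices were registered. A condensed sketch of that bring-up, assuming the workspace's build/bin/nvmf_tgt and scripts/rpc.py paths and the default RPC socket; the until-loop is only a stand-in for the test's waitforlisten helper:

  # Sketch of the target bring-up for the referrals test (paths relative to the SPDK repo)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # -m 0xF: 4 cores, -e 0xFFFF: tracepoint group mask, -i 0: shm id
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done   # stand-in for waitforlisten
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # The next steps in the trace: discovery listener on 8009, then three referrals on 4430
  scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done
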
00:09:25.505 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:25.505 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.505 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 [2024-12-14 17:12:22.041660] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:25.505 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:25.505 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.505 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:25.505 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.505 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:25.505 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.505 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@48 -- # jq length 00:09:25.505 17:12:22 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:25.505 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.505 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:25.505 17:12:22 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:25.505 17:12:22 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:25.505 17:12:22 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:25.505 17:12:22 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:25.505 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.505 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.505 17:12:22 -- target/referrals.sh@21 -- # sort 00:09:25.505 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:25.505 17:12:22 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:25.505 17:12:22 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:25.505 17:12:22 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:25.505 17:12:22 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:25.505 17:12:22 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:25.505 17:12:22 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:25.505 17:12:22 -- target/referrals.sh@26 -- # sort 00:09:25.763 17:12:22 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
00:09:25.763 17:12:22 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:25.763 17:12:22 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:25.763 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.763 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.763 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.763 17:12:22 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:25.763 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.764 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.764 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.764 17:12:22 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:25.764 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.764 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.764 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.764 17:12:22 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:25.764 17:12:22 -- target/referrals.sh@56 -- # jq length 00:09:25.764 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.764 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.764 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.764 17:12:22 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:25.764 17:12:22 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:25.764 17:12:22 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:25.764 17:12:22 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:25.764 17:12:22 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:25.764 17:12:22 -- target/referrals.sh@26 -- # sort 00:09:25.764 17:12:22 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:25.764 17:12:22 -- target/referrals.sh@26 -- # echo 00:09:25.764 17:12:22 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:25.764 17:12:22 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:25.764 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.764 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:25.764 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.764 17:12:22 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:25.764 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.764 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:26.022 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.022 17:12:22 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:26.022 17:12:22 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:26.022 17:12:22 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:26.022 17:12:22 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:26.022 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.022 17:12:22 -- target/referrals.sh@21 -- # sort 00:09:26.022 17:12:22 -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.022 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.022 17:12:22 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:26.022 17:12:22 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:26.022 17:12:22 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:26.022 17:12:22 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:26.022 17:12:22 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:26.022 17:12:22 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:26.022 17:12:22 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:26.022 17:12:22 -- target/referrals.sh@26 -- # sort 00:09:26.022 17:12:22 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:26.022 17:12:22 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:26.022 17:12:22 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:26.022 17:12:22 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:26.022 17:12:22 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:26.022 17:12:22 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:26.022 17:12:22 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:26.280 17:12:22 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:26.280 17:12:22 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:26.280 17:12:22 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:26.280 17:12:22 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:26.280 17:12:22 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:26.280 17:12:22 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:26.280 17:12:22 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:26.280 17:12:22 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:26.280 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.280 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:26.280 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.280 17:12:22 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:26.280 17:12:22 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:26.280 17:12:22 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:26.280 17:12:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.280 17:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:26.280 17:12:22 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:26.280 17:12:22 -- target/referrals.sh@21 
-- # sort 00:09:26.280 17:12:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.280 17:12:22 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:26.280 17:12:22 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:26.280 17:12:22 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:26.280 17:12:22 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:26.280 17:12:22 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:26.280 17:12:22 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:26.280 17:12:22 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:26.280 17:12:22 -- target/referrals.sh@26 -- # sort 00:09:26.538 17:12:22 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:26.538 17:12:22 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:26.538 17:12:22 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:26.538 17:12:22 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:26.538 17:12:22 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:26.538 17:12:22 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:26.538 17:12:22 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:26.538 17:12:23 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:26.538 17:12:23 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:26.538 17:12:23 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:26.538 17:12:23 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:26.538 17:12:23 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:26.538 17:12:23 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:26.538 17:12:23 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:26.538 17:12:23 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:26.538 17:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.538 17:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:26.538 17:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.538 17:12:23 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:26.538 17:12:23 -- target/referrals.sh@82 -- # jq length 00:09:26.538 17:12:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.538 17:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:26.538 17:12:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.795 17:12:23 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:26.795 17:12:23 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:26.795 17:12:23 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:26.795 17:12:23 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:26.795 17:12:23 
-- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:26.795 17:12:23 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:26.795 17:12:23 -- target/referrals.sh@26 -- # sort 00:09:26.795 17:12:23 -- target/referrals.sh@26 -- # echo 00:09:26.795 17:12:23 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:26.795 17:12:23 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:26.795 17:12:23 -- target/referrals.sh@86 -- # nvmftestfini 00:09:26.795 17:12:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:26.795 17:12:23 -- nvmf/common.sh@116 -- # sync 00:09:26.795 17:12:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:26.795 17:12:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:26.795 17:12:23 -- nvmf/common.sh@119 -- # set +e 00:09:26.795 17:12:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:26.795 17:12:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:26.795 rmmod nvme_rdma 00:09:26.795 rmmod nvme_fabrics 00:09:26.795 17:12:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:26.795 17:12:23 -- nvmf/common.sh@123 -- # set -e 00:09:26.795 17:12:23 -- nvmf/common.sh@124 -- # return 0 00:09:26.795 17:12:23 -- nvmf/common.sh@477 -- # '[' -n 1228154 ']' 00:09:26.795 17:12:23 -- nvmf/common.sh@478 -- # killprocess 1228154 00:09:26.796 17:12:23 -- common/autotest_common.sh@936 -- # '[' -z 1228154 ']' 00:09:26.796 17:12:23 -- common/autotest_common.sh@940 -- # kill -0 1228154 00:09:26.796 17:12:23 -- common/autotest_common.sh@941 -- # uname 00:09:26.796 17:12:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:26.796 17:12:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1228154 00:09:26.796 17:12:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:26.796 17:12:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:26.796 17:12:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1228154' 00:09:26.796 killing process with pid 1228154 00:09:26.796 17:12:23 -- common/autotest_common.sh@955 -- # kill 1228154 00:09:26.796 17:12:23 -- common/autotest_common.sh@960 -- # wait 1228154 00:09:27.054 17:12:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:27.054 17:12:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:27.054 00:09:27.054 real 0m10.010s 00:09:27.054 user 0m13.027s 00:09:27.054 sys 0m6.251s 00:09:27.054 17:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:27.054 17:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:27.054 ************************************ 00:09:27.054 END TEST nvmf_referrals 00:09:27.054 ************************************ 00:09:27.314 17:12:23 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:27.314 17:12:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:27.314 17:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.314 17:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:27.314 ************************************ 00:09:27.314 START TEST nvmf_connect_disconnect 00:09:27.314 ************************************ 00:09:27.314 17:12:23 -- common/autotest_common.sh@1114 -- # 
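nvmftestfini above tears the host side down with a retry loop (for i in {1..20} with set +e around the modprobe), presumably because nvme-rdma can still be busy for a moment after the last disconnect. Outside the harness the cleanup is just:

# Unload the RDMA host driver, then the generic fabrics module it depends on.
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics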
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:27.314 * Looking for test storage... 00:09:27.314 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.314 17:12:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:27.314 17:12:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:27.314 17:12:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:27.314 17:12:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:27.314 17:12:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:27.314 17:12:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:27.314 17:12:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:27.314 17:12:23 -- scripts/common.sh@335 -- # IFS=.-: 00:09:27.314 17:12:23 -- scripts/common.sh@335 -- # read -ra ver1 00:09:27.314 17:12:23 -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.314 17:12:23 -- scripts/common.sh@336 -- # read -ra ver2 00:09:27.314 17:12:23 -- scripts/common.sh@337 -- # local 'op=<' 00:09:27.314 17:12:23 -- scripts/common.sh@339 -- # ver1_l=2 00:09:27.314 17:12:23 -- scripts/common.sh@340 -- # ver2_l=1 00:09:27.314 17:12:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:27.314 17:12:23 -- scripts/common.sh@343 -- # case "$op" in 00:09:27.314 17:12:23 -- scripts/common.sh@344 -- # : 1 00:09:27.314 17:12:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:27.314 17:12:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.314 17:12:23 -- scripts/common.sh@364 -- # decimal 1 00:09:27.314 17:12:23 -- scripts/common.sh@352 -- # local d=1 00:09:27.314 17:12:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.314 17:12:23 -- scripts/common.sh@354 -- # echo 1 00:09:27.314 17:12:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:27.314 17:12:23 -- scripts/common.sh@365 -- # decimal 2 00:09:27.314 17:12:23 -- scripts/common.sh@352 -- # local d=2 00:09:27.314 17:12:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.314 17:12:23 -- scripts/common.sh@354 -- # echo 2 00:09:27.314 17:12:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:27.314 17:12:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:27.314 17:12:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:27.314 17:12:23 -- scripts/common.sh@367 -- # return 0 00:09:27.314 17:12:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.314 17:12:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:27.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.314 --rc genhtml_branch_coverage=1 00:09:27.314 --rc genhtml_function_coverage=1 00:09:27.314 --rc genhtml_legend=1 00:09:27.314 --rc geninfo_all_blocks=1 00:09:27.314 --rc geninfo_unexecuted_blocks=1 00:09:27.314 00:09:27.314 ' 00:09:27.314 17:12:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:27.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.314 --rc genhtml_branch_coverage=1 00:09:27.314 --rc genhtml_function_coverage=1 00:09:27.314 --rc genhtml_legend=1 00:09:27.314 --rc geninfo_all_blocks=1 00:09:27.314 --rc geninfo_unexecuted_blocks=1 00:09:27.314 00:09:27.314 ' 00:09:27.314 17:12:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:27.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.314 --rc genhtml_branch_coverage=1 00:09:27.314 --rc genhtml_function_coverage=1 
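The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version field by field before it picks coverage flags. As an aside, and not what scripts/common.sh itself does, the same dotted-version test can be written with coreutils sort -V:

# Succeeds when $1 is strictly older than $2, e.g. version_lt 1.15 2.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}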
00:09:27.314 --rc genhtml_legend=1 00:09:27.314 --rc geninfo_all_blocks=1 00:09:27.314 --rc geninfo_unexecuted_blocks=1 00:09:27.314 00:09:27.314 ' 00:09:27.314 17:12:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:27.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.314 --rc genhtml_branch_coverage=1 00:09:27.314 --rc genhtml_function_coverage=1 00:09:27.314 --rc genhtml_legend=1 00:09:27.314 --rc geninfo_all_blocks=1 00:09:27.314 --rc geninfo_unexecuted_blocks=1 00:09:27.314 00:09:27.314 ' 00:09:27.314 17:12:23 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.314 17:12:23 -- nvmf/common.sh@7 -- # uname -s 00:09:27.314 17:12:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.314 17:12:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.314 17:12:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.314 17:12:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.314 17:12:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.314 17:12:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.314 17:12:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.314 17:12:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.314 17:12:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.314 17:12:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.314 17:12:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:27.314 17:12:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:27.314 17:12:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.314 17:12:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.314 17:12:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.314 17:12:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:27.314 17:12:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.314 17:12:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.314 17:12:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.314 17:12:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.314 17:12:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.314 17:12:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.314 17:12:23 -- paths/export.sh@5 -- # export PATH 00:09:27.314 17:12:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.314 17:12:23 -- nvmf/common.sh@46 -- # : 0 00:09:27.314 17:12:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:27.314 17:12:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:27.314 17:12:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:27.314 17:12:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.314 17:12:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.314 17:12:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:27.314 17:12:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:27.314 17:12:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:27.314 17:12:23 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.314 17:12:23 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.314 17:12:23 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:27.314 17:12:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:27.314 17:12:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.314 17:12:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:27.314 17:12:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:27.314 17:12:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:27.314 17:12:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.314 17:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.314 17:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.314 17:12:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:27.314 17:12:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:27.314 17:12:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:27.314 17:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:33.873 17:12:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:33.873 17:12:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:33.873 17:12:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:33.873 17:12:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:33.873 17:12:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:33.873 17:12:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:33.873 17:12:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:33.873 17:12:30 -- nvmf/common.sh@294 -- # net_devs=() 00:09:33.873 17:12:30 -- nvmf/common.sh@294 -- # local -ga net_devs 
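nvmf/common.sh, sourced in the lines above, generates one host NQN for the whole run with nvme gen-hostnqn and reuses the UUID embedded in it as the host ID, which is why every discover and connect in this log carries the same identity. A minimal sketch of that setup, with variable names taken from the trace (the hostid derivation shown here is illustrative; common.sh may extract it differently):

# Generate a host NQN once; the UUID after "uuid:" doubles as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")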
00:09:33.873 17:12:30 -- nvmf/common.sh@295 -- # e810=() 00:09:33.873 17:12:30 -- nvmf/common.sh@295 -- # local -ga e810 00:09:33.873 17:12:30 -- nvmf/common.sh@296 -- # x722=() 00:09:33.873 17:12:30 -- nvmf/common.sh@296 -- # local -ga x722 00:09:33.873 17:12:30 -- nvmf/common.sh@297 -- # mlx=() 00:09:33.873 17:12:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:33.873 17:12:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.873 17:12:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.873 17:12:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.873 17:12:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.873 17:12:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.132 17:12:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.132 17:12:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.132 17:12:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.132 17:12:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.132 17:12:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.132 17:12:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.132 17:12:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:34.132 17:12:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:34.132 17:12:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:34.132 17:12:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:34.132 17:12:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:34.132 17:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:34.132 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:34.132 17:12:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.132 17:12:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:34.132 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:34.132 17:12:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.132 17:12:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:34.132 17:12:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.132 17:12:30 
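The NIC scan above works purely from PCI IDs (0x15b3 is the Mellanox vendor ID) and then resolves each port to its kernel netdev through sysfs. The same lookup can be done by hand; lspci is used here only for illustration, since the harness itself reads a cached PCI bus map:

# Mellanox devices in this host, then the netdev behind the first port found in this run.
lspci -d 15b3:
ls /sys/bus/pci/devices/0000:d9:00.0/net    # -> mlx_0_0 on this machine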
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:34.132 17:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.132 17:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:34.132 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:34.132 17:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.132 17:12:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.132 17:12:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:34.132 17:12:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.132 17:12:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:34.132 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:34.132 17:12:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.132 17:12:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:34.132 17:12:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:34.132 17:12:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:34.132 17:12:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:34.132 17:12:30 -- nvmf/common.sh@57 -- # uname 00:09:34.132 17:12:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:34.132 17:12:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:34.132 17:12:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:34.132 17:12:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:34.132 17:12:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:34.132 17:12:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:34.132 17:12:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:34.132 17:12:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:34.132 17:12:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:34.132 17:12:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:34.132 17:12:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:34.132 17:12:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.132 17:12:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:34.132 17:12:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:34.132 17:12:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.132 17:12:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:34.132 17:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:34.132 17:12:30 -- nvmf/common.sh@104 -- # continue 2 00:09:34.132 17:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.132 17:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:34.132 17:12:30 -- nvmf/common.sh@104 -- # continue 2 00:09:34.132 17:12:30 -- 
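rdma_device_init above loads the kernel RDMA stack before any NVMe-oF traffic can flow; stripped of the harness, the module sequence from this trace is:

# InfiniBand core, connection managers, user-space verbs, and the RDMA CM stack used by NVMe-oF.
modprobe ib_cm
modprobe ib_core
modprobe ib_umad
modprobe ib_uverbs
modprobe iw_cm
modprobe rdma_cm
modprobe rdma_ucm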
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:34.132 17:12:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:34.132 17:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:34.132 17:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:34.132 17:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:34.132 17:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:34.132 17:12:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:34.132 17:12:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:34.132 17:12:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:34.133 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.133 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:34.133 altname enp217s0f0np0 00:09:34.133 altname ens818f0np0 00:09:34.133 inet 192.168.100.8/24 scope global mlx_0_0 00:09:34.133 valid_lft forever preferred_lft forever 00:09:34.133 17:12:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:34.133 17:12:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:34.133 17:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:34.133 17:12:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:34.133 17:12:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:34.133 17:12:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:34.133 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.133 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:34.133 altname enp217s0f1np1 00:09:34.133 altname ens818f1np1 00:09:34.133 inet 192.168.100.9/24 scope global mlx_0_1 00:09:34.133 valid_lft forever preferred_lft forever 00:09:34.133 17:12:30 -- nvmf/common.sh@410 -- # return 0 00:09:34.133 17:12:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:34.133 17:12:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:34.133 17:12:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:34.133 17:12:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:34.133 17:12:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:34.133 17:12:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.133 17:12:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:34.133 17:12:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:34.133 17:12:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.133 17:12:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:34.133 17:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:34.133 17:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.133 17:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.133 17:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:34.133 17:12:30 -- nvmf/common.sh@104 -- # continue 2 00:09:34.133 17:12:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:34.133 17:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.133 17:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.133 17:12:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.133 17:12:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.133 17:12:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 
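allocate_nic_ips walks each RDMA netdev and reads its IPv4 address with the ip/awk/cut pipeline traced above; on this host that yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. On its own the lookup is simply:

# First IPv4 address on an interface, without the /prefix suffix.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # -> 192.168.100.8 in this run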
00:09:34.133 17:12:30 -- nvmf/common.sh@104 -- # continue 2 00:09:34.133 17:12:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:34.133 17:12:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:34.133 17:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:34.133 17:12:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:34.133 17:12:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:34.133 17:12:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:34.133 17:12:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:34.133 17:12:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:34.133 192.168.100.9' 00:09:34.133 17:12:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:34.133 192.168.100.9' 00:09:34.133 17:12:30 -- nvmf/common.sh@445 -- # head -n 1 00:09:34.133 17:12:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:34.133 17:12:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:34.133 192.168.100.9' 00:09:34.133 17:12:30 -- nvmf/common.sh@446 -- # tail -n +2 00:09:34.133 17:12:30 -- nvmf/common.sh@446 -- # head -n 1 00:09:34.133 17:12:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:34.133 17:12:30 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:34.133 17:12:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:34.133 17:12:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:34.133 17:12:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:34.133 17:12:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:34.391 17:12:30 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:34.391 17:12:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:34.391 17:12:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.391 17:12:30 -- common/autotest_common.sh@10 -- # set +x 00:09:34.391 17:12:30 -- nvmf/common.sh@469 -- # nvmfpid=1232080 00:09:34.391 17:12:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.391 17:12:30 -- nvmf/common.sh@470 -- # waitforlisten 1232080 00:09:34.391 17:12:30 -- common/autotest_common.sh@829 -- # '[' -z 1232080 ']' 00:09:34.391 17:12:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.391 17:12:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.391 17:12:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.391 17:12:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.391 17:12:30 -- common/autotest_common.sh@10 -- # set +x 00:09:34.391 [2024-12-14 17:12:30.868949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
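nvmfappstart above launches the target with a four-core mask and every trace group enabled, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A hedged sketch of the same start-up outside the harness; the polling loop is illustrative and not the waitforlisten implementation:

# Start the SPDK NVMe-oF target on cores 0-3 and wait for its RPC socket to come up.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
NVMF_PID=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done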
00:09:34.391 [2024-12-14 17:12:30.868999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.391 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.391 [2024-12-14 17:12:30.942816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.391 [2024-12-14 17:12:30.980823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.391 [2024-12-14 17:12:30.980952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.391 [2024-12-14 17:12:30.980962] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.391 [2024-12-14 17:12:30.980971] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.391 [2024-12-14 17:12:30.981082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.391 [2024-12-14 17:12:30.981181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.391 [2024-12-14 17:12:30.981197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.391 [2024-12-14 17:12:30.981206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.326 17:12:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.326 17:12:31 -- common/autotest_common.sh@862 -- # return 0 00:09:35.326 17:12:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:35.326 17:12:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.326 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:09:35.326 17:12:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:35.326 17:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.326 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:09:35.326 [2024-12-14 17:12:31.742002] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:35.326 [2024-12-14 17:12:31.763113] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb000f0/0xb045c0) succeed. 00:09:35.326 [2024-12-14 17:12:31.772305] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb01690/0xb45c60) succeed. 
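With the app listening, the first RPC in the trace creates the RDMA transport; the warning above about in-capsule data shows the requested 0 bytes being raised to the 256-byte minimum needed for msdbd=16. The direct equivalent of that rpc_cmd call is:

# RDMA transport with 1024 shared receive buffers, an 8 KiB IO unit size and no in-capsule data.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0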
00:09:35.326 17:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:35.326 17:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.326 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:09:35.326 17:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:35.326 17:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.326 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:09:35.326 17:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.326 17:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.326 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:09:35.326 17:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:35.326 17:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.326 17:12:31 -- common/autotest_common.sh@10 -- # set +x 00:09:35.326 [2024-12-14 17:12:31.912532] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:35.326 17:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:35.326 17:12:31 -- target/connect_disconnect.sh@34 -- # set +x 00:09:38.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.679 
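The target side is then provisioned with four more RPCs (a 64 MB malloc bdev with 512-byte blocks, the cnode1 subsystem, its namespace, and an RDMA listener on 192.168.100.8 port 4420), after which each of the 100 iterations producing the "disconnected 1 controller(s)" lines connects and disconnects one host controller. A condensed sketch of one round trip; the real loop lives in connect_disconnect.sh and may differ in detail:

scripts/rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# One connect/disconnect iteration as the host sees it.
nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e
nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected 1 controller(s)" line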
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:18.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.603 17:17:47 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:50.603 17:17:47 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:50.604 17:17:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:50.604 17:17:47 -- nvmf/common.sh@116 -- # sync 00:14:50.604 17:17:47 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:50.604 17:17:47 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:50.604 17:17:47 -- nvmf/common.sh@119 -- # set +e 00:14:50.604 17:17:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:50.604 17:17:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:50.604 rmmod nvme_rdma 00:14:50.604 rmmod nvme_fabrics 00:14:50.604 17:17:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:50.863 17:17:47 -- nvmf/common.sh@123 -- # set -e 00:14:50.863 17:17:47 -- nvmf/common.sh@124 -- # return 0 00:14:50.863 17:17:47 -- nvmf/common.sh@477 -- # '[' -n 1232080 ']' 00:14:50.863 17:17:47 -- nvmf/common.sh@478 -- # killprocess 1232080 00:14:50.863 17:17:47 -- common/autotest_common.sh@936 -- # '[' -z 1232080 ']' 00:14:50.863 17:17:47 -- common/autotest_common.sh@940 -- # kill -0 1232080 00:14:50.863 17:17:47 -- common/autotest_common.sh@941 -- # uname 00:14:50.863 17:17:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.863 17:17:47 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1232080 00:14:50.863 17:17:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.863 17:17:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.863 17:17:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1232080' 00:14:50.863 killing process with pid 1232080 00:14:50.863 17:17:47 -- common/autotest_common.sh@955 -- # kill 1232080 00:14:50.863 17:17:47 -- common/autotest_common.sh@960 -- # wait 1232080 00:14:51.122 17:17:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.122 17:17:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:51.122 00:14:51.122 real 5m23.836s 00:14:51.122 user 21m3.251s 00:14:51.122 sys 0m18.028s 00:14:51.122 17:17:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.122 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:51.122 ************************************ 00:14:51.122 END TEST nvmf_connect_disconnect 00:14:51.122 ************************************ 00:14:51.122 17:17:47 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:51.122 17:17:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.122 17:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.122 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:51.122 ************************************ 00:14:51.122 START TEST nvmf_multitarget 00:14:51.122 ************************************ 00:14:51.122 17:17:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:51.122 * Looking for test storage... 00:14:51.122 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:51.122 17:17:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:51.122 17:17:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:51.122 17:17:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:51.382 17:17:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:51.382 17:17:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:51.382 17:17:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:51.382 17:17:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:51.382 17:17:47 -- scripts/common.sh@335 -- # IFS=.-: 00:14:51.382 17:17:47 -- scripts/common.sh@335 -- # read -ra ver1 00:14:51.382 17:17:47 -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.382 17:17:47 -- scripts/common.sh@336 -- # read -ra ver2 00:14:51.382 17:17:47 -- scripts/common.sh@337 -- # local 'op=<' 00:14:51.382 17:17:47 -- scripts/common.sh@339 -- # ver1_l=2 00:14:51.382 17:17:47 -- scripts/common.sh@340 -- # ver2_l=1 00:14:51.382 17:17:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:51.382 17:17:47 -- scripts/common.sh@343 -- # case "$op" in 00:14:51.382 17:17:47 -- scripts/common.sh@344 -- # : 1 00:14:51.382 17:17:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:51.382 17:17:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.382 17:17:47 -- scripts/common.sh@364 -- # decimal 1 00:14:51.382 17:17:47 -- scripts/common.sh@352 -- # local d=1 00:14:51.382 17:17:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.382 17:17:47 -- scripts/common.sh@354 -- # echo 1 00:14:51.382 17:17:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:51.382 17:17:47 -- scripts/common.sh@365 -- # decimal 2 00:14:51.382 17:17:47 -- scripts/common.sh@352 -- # local d=2 00:14:51.382 17:17:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.382 17:17:47 -- scripts/common.sh@354 -- # echo 2 00:14:51.382 17:17:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:51.382 17:17:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:51.382 17:17:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:51.382 17:17:47 -- scripts/common.sh@367 -- # return 0 00:14:51.382 17:17:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.382 17:17:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.382 --rc genhtml_branch_coverage=1 00:14:51.382 --rc genhtml_function_coverage=1 00:14:51.382 --rc genhtml_legend=1 00:14:51.382 --rc geninfo_all_blocks=1 00:14:51.382 --rc geninfo_unexecuted_blocks=1 00:14:51.382 00:14:51.382 ' 00:14:51.382 17:17:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.382 --rc genhtml_branch_coverage=1 00:14:51.382 --rc genhtml_function_coverage=1 00:14:51.382 --rc genhtml_legend=1 00:14:51.382 --rc geninfo_all_blocks=1 00:14:51.382 --rc geninfo_unexecuted_blocks=1 00:14:51.382 00:14:51.382 ' 00:14:51.382 17:17:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.382 --rc genhtml_branch_coverage=1 00:14:51.382 --rc genhtml_function_coverage=1 00:14:51.382 --rc genhtml_legend=1 00:14:51.382 --rc geninfo_all_blocks=1 00:14:51.382 --rc geninfo_unexecuted_blocks=1 00:14:51.382 00:14:51.382 ' 00:14:51.382 17:17:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:51.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.382 --rc genhtml_branch_coverage=1 00:14:51.382 --rc genhtml_function_coverage=1 00:14:51.382 --rc genhtml_legend=1 00:14:51.382 --rc geninfo_all_blocks=1 00:14:51.382 --rc geninfo_unexecuted_blocks=1 00:14:51.382 00:14:51.382 ' 00:14:51.382 17:17:47 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.382 17:17:47 -- nvmf/common.sh@7 -- # uname -s 00:14:51.382 17:17:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.382 17:17:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.382 17:17:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.382 17:17:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.382 17:17:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.382 17:17:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.382 17:17:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.382 17:17:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.382 17:17:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.382 17:17:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.382 17:17:47 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:51.382 17:17:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:51.382 17:17:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.382 17:17:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.382 17:17:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.382 17:17:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:51.382 17:17:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.382 17:17:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.382 17:17:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.382 17:17:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.382 17:17:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.382 17:17:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.382 17:17:47 -- paths/export.sh@5 -- # export PATH 00:14:51.382 17:17:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.382 17:17:47 -- nvmf/common.sh@46 -- # : 0 00:14:51.382 17:17:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.382 17:17:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.382 17:17:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.382 17:17:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.382 17:17:47 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.382 17:17:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.382 17:17:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.382 17:17:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.382 17:17:47 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:51.382 17:17:47 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:51.382 17:17:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:51.382 17:17:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.382 17:17:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.382 17:17:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.382 17:17:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.382 17:17:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.382 17:17:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.382 17:17:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.382 17:17:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:51.382 17:17:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:51.382 17:17:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:51.382 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:57.951 17:17:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:57.951 17:17:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:57.951 17:17:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:57.951 17:17:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:57.951 17:17:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:57.951 17:17:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:57.951 17:17:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:57.951 17:17:54 -- nvmf/common.sh@294 -- # net_devs=() 00:14:57.951 17:17:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:57.951 17:17:54 -- nvmf/common.sh@295 -- # e810=() 00:14:57.951 17:17:54 -- nvmf/common.sh@295 -- # local -ga e810 00:14:57.951 17:17:54 -- nvmf/common.sh@296 -- # x722=() 00:14:57.951 17:17:54 -- nvmf/common.sh@296 -- # local -ga x722 00:14:57.951 17:17:54 -- nvmf/common.sh@297 -- # mlx=() 00:14:57.951 17:17:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:57.951 17:17:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.951 17:17:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:57.951 17:17:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:57.951 17:17:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
00:14:57.951 17:17:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:57.951 17:17:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:57.951 17:17:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:57.951 17:17:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:57.951 17:17:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.951 17:17:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:57.951 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:57.952 17:17:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.952 17:17:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:57.952 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:57.952 17:17:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.952 17:17:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:57.952 17:17:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.952 17:17:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.952 17:17:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.952 17:17:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:57.952 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.952 17:17:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.952 17:17:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.952 17:17:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.952 17:17:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:57.952 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.952 17:17:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:57.952 17:17:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:57.952 17:17:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:57.952 17:17:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:57.952 17:17:54 -- nvmf/common.sh@57 -- # uname 00:14:57.952 17:17:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:57.952 17:17:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 
00:14:57.952 17:17:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:57.952 17:17:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:57.952 17:17:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:57.952 17:17:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:57.952 17:17:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:57.952 17:17:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:57.952 17:17:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:57.952 17:17:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:57.952 17:17:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:57.952 17:17:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.952 17:17:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:57.952 17:17:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:57.952 17:17:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.952 17:17:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:57.952 17:17:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@104 -- # continue 2 00:14:57.952 17:17:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@104 -- # continue 2 00:14:57.952 17:17:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:57.952 17:17:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.952 17:17:54 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:57.952 17:17:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:57.952 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.952 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:57.952 altname enp217s0f0np0 00:14:57.952 altname ens818f0np0 00:14:57.952 inet 192.168.100.8/24 scope global mlx_0_0 00:14:57.952 valid_lft forever preferred_lft forever 00:14:57.952 17:17:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:57.952 17:17:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.952 17:17:54 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:57.952 17:17:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:57.952 7: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:57.952 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:57.952 altname enp217s0f1np1 00:14:57.952 altname ens818f1np1 00:14:57.952 inet 192.168.100.9/24 scope global mlx_0_1 00:14:57.952 valid_lft forever preferred_lft forever 00:14:57.952 17:17:54 -- nvmf/common.sh@410 -- # return 0 00:14:57.952 17:17:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.952 17:17:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:57.952 17:17:54 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:57.952 17:17:54 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:57.952 17:17:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.952 17:17:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:57.952 17:17:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:57.952 17:17:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.952 17:17:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:57.952 17:17:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@104 -- # continue 2 00:14:57.952 17:17:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.952 17:17:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.952 17:17:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@104 -- # continue 2 00:14:57.952 17:17:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:57.952 17:17:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.952 17:17:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:57.952 17:17:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:57.952 17:17:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:57.952 17:17:54 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:57.952 192.168.100.9' 00:14:57.952 17:17:54 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:57.952 192.168.100.9' 00:14:57.952 17:17:54 -- nvmf/common.sh@445 -- # head -n 1 00:14:57.952 17:17:54 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:57.952 17:17:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:57.952 192.168.100.9' 00:14:57.952 17:17:54 -- nvmf/common.sh@446 -- # tail -n +2 00:14:57.952 17:17:54 -- nvmf/common.sh@446 -- # head -n 1 00:14:57.952 17:17:54 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:57.952 17:17:54 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:57.952 17:17:54 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:57.952 17:17:54 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:57.952 17:17:54 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:57.952 17:17:54 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:57.952 17:17:54 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:57.952 17:17:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.952 17:17:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.952 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:14:57.952 17:17:54 -- nvmf/common.sh@469 -- # nvmfpid=1291977 00:14:57.952 17:17:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.952 17:17:54 -- nvmf/common.sh@470 -- # waitforlisten 1291977 00:14:57.952 17:17:54 -- common/autotest_common.sh@829 -- # '[' -z 1291977 ']' 00:14:57.952 17:17:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.952 17:17:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.952 17:17:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.952 17:17:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.952 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:14:57.953 [2024-12-14 17:17:54.458061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:57.953 [2024-12-14 17:17:54.458109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.953 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.953 [2024-12-14 17:17:54.528706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.953 [2024-12-14 17:17:54.566128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.953 [2024-12-14 17:17:54.566267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.953 [2024-12-14 17:17:54.566278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.953 [2024-12-14 17:17:54.566287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.953 [2024-12-14 17:17:54.566398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.953 [2024-12-14 17:17:54.566492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.953 [2024-12-14 17:17:54.566578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.953 [2024-12-14 17:17:54.566580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.899 17:17:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.899 17:17:55 -- common/autotest_common.sh@862 -- # return 0 00:14:58.899 17:17:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.899 17:17:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.899 17:17:55 -- common/autotest_common.sh@10 -- # set +x 00:14:58.899 17:17:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.899 17:17:55 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:58.899 17:17:55 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:58.899 17:17:55 -- target/multitarget.sh@21 -- # jq length 00:14:58.899 17:17:55 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:58.899 17:17:55 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:58.899 "nvmf_tgt_1" 00:14:58.899 17:17:55 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:59.157 "nvmf_tgt_2" 00:14:59.157 17:17:55 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.157 17:17:55 -- target/multitarget.sh@28 -- # jq length 00:14:59.157 17:17:55 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:59.157 17:17:55 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:59.157 true 00:14:59.416 17:17:55 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:59.416 true 00:14:59.416 17:17:55 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.416 17:17:55 -- target/multitarget.sh@35 -- # jq length 00:14:59.416 17:17:56 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:59.416 17:17:56 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:59.416 17:17:56 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:59.416 17:17:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:59.416 17:17:56 -- nvmf/common.sh@116 -- # sync 00:14:59.416 17:17:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:59.416 17:17:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:59.416 17:17:56 -- nvmf/common.sh@119 -- # set +e 00:14:59.416 17:17:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:59.416 17:17:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:59.416 rmmod nvme_rdma 00:14:59.416 rmmod nvme_fabrics 00:14:59.416 17:17:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:59.675 17:17:56 -- nvmf/common.sh@123 -- # set -e 00:14:59.675 17:17:56 -- nvmf/common.sh@124 -- # 
return 0 00:14:59.675 17:17:56 -- nvmf/common.sh@477 -- # '[' -n 1291977 ']' 00:14:59.675 17:17:56 -- nvmf/common.sh@478 -- # killprocess 1291977 00:14:59.675 17:17:56 -- common/autotest_common.sh@936 -- # '[' -z 1291977 ']' 00:14:59.675 17:17:56 -- common/autotest_common.sh@940 -- # kill -0 1291977 00:14:59.675 17:17:56 -- common/autotest_common.sh@941 -- # uname 00:14:59.675 17:17:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.675 17:17:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1291977 00:14:59.675 17:17:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.675 17:17:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.675 17:17:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1291977' 00:14:59.675 killing process with pid 1291977 00:14:59.675 17:17:56 -- common/autotest_common.sh@955 -- # kill 1291977 00:14:59.675 17:17:56 -- common/autotest_common.sh@960 -- # wait 1291977 00:14:59.675 17:17:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:59.675 17:17:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:59.675 00:14:59.675 real 0m8.677s 00:14:59.675 user 0m9.689s 00:14:59.675 sys 0m5.530s 00:14:59.675 17:17:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:59.675 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:14:59.675 ************************************ 00:14:59.675 END TEST nvmf_multitarget 00:14:59.675 ************************************ 00:14:59.934 17:17:56 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:59.934 17:17:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:59.934 17:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.934 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 ************************************ 00:14:59.934 START TEST nvmf_rpc 00:14:59.934 ************************************ 00:14:59.934 17:17:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:59.934 * Looking for test storage... 
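(Editor's note) For reference, the multitarget test that just finished (END TEST nvmf_multitarget) boils down to the RPC round-trip below, driven through test/nvmf/target/multitarget_rpc.py. This is a sketch assembled from the commands visible in the trace; it assumes an nvmf_tgt is already running and listening on /var/tmp/spdk.sock, as started earlier in the log:

#!/usr/bin/env bash
# Sketch of the multitarget flow: create two extra named targets, verify
# the total count via nvmf_get_targets, then delete them again.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32

count=$($rpc_py nvmf_get_targets | jq length)
(( count == 3 )) || { echo "expected 3 targets, got $count" >&2; exit 1; }

$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2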
00:14:59.934 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:59.934 17:17:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:59.934 17:17:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:59.934 17:17:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:59.934 17:17:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:59.934 17:17:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:59.934 17:17:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:59.934 17:17:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:59.934 17:17:56 -- scripts/common.sh@335 -- # IFS=.-: 00:14:59.934 17:17:56 -- scripts/common.sh@335 -- # read -ra ver1 00:14:59.934 17:17:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.934 17:17:56 -- scripts/common.sh@336 -- # read -ra ver2 00:14:59.934 17:17:56 -- scripts/common.sh@337 -- # local 'op=<' 00:14:59.934 17:17:56 -- scripts/common.sh@339 -- # ver1_l=2 00:14:59.934 17:17:56 -- scripts/common.sh@340 -- # ver2_l=1 00:14:59.934 17:17:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:59.934 17:17:56 -- scripts/common.sh@343 -- # case "$op" in 00:14:59.934 17:17:56 -- scripts/common.sh@344 -- # : 1 00:14:59.934 17:17:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:59.934 17:17:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:59.934 17:17:56 -- scripts/common.sh@364 -- # decimal 1 00:14:59.934 17:17:56 -- scripts/common.sh@352 -- # local d=1 00:14:59.934 17:17:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.934 17:17:56 -- scripts/common.sh@354 -- # echo 1 00:14:59.934 17:17:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:59.934 17:17:56 -- scripts/common.sh@365 -- # decimal 2 00:14:59.934 17:17:56 -- scripts/common.sh@352 -- # local d=2 00:14:59.934 17:17:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.934 17:17:56 -- scripts/common.sh@354 -- # echo 2 00:14:59.934 17:17:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:59.934 17:17:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:59.934 17:17:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:59.934 17:17:56 -- scripts/common.sh@367 -- # return 0 00:14:59.934 17:17:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.934 17:17:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:59.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.934 --rc genhtml_branch_coverage=1 00:14:59.934 --rc genhtml_function_coverage=1 00:14:59.934 --rc genhtml_legend=1 00:14:59.934 --rc geninfo_all_blocks=1 00:14:59.934 --rc geninfo_unexecuted_blocks=1 00:14:59.934 00:14:59.934 ' 00:14:59.934 17:17:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:59.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.934 --rc genhtml_branch_coverage=1 00:14:59.934 --rc genhtml_function_coverage=1 00:14:59.934 --rc genhtml_legend=1 00:14:59.934 --rc geninfo_all_blocks=1 00:14:59.934 --rc geninfo_unexecuted_blocks=1 00:14:59.934 00:14:59.934 ' 00:14:59.934 17:17:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:59.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.934 --rc genhtml_branch_coverage=1 00:14:59.934 --rc genhtml_function_coverage=1 00:14:59.934 --rc genhtml_legend=1 00:14:59.934 --rc geninfo_all_blocks=1 00:14:59.934 --rc geninfo_unexecuted_blocks=1 00:14:59.934 00:14:59.934 ' 
00:14:59.934 17:17:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:59.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.934 --rc genhtml_branch_coverage=1 00:14:59.934 --rc genhtml_function_coverage=1 00:14:59.934 --rc genhtml_legend=1 00:14:59.934 --rc geninfo_all_blocks=1 00:14:59.934 --rc geninfo_unexecuted_blocks=1 00:14:59.934 00:14:59.934 ' 00:14:59.934 17:17:56 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.934 17:17:56 -- nvmf/common.sh@7 -- # uname -s 00:14:59.934 17:17:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.934 17:17:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.934 17:17:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.934 17:17:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.934 17:17:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.934 17:17:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.935 17:17:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.935 17:17:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.935 17:17:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.935 17:17:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.935 17:17:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:59.935 17:17:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:59.935 17:17:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.935 17:17:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.935 17:17:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.935 17:17:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:59.935 17:17:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.935 17:17:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.935 17:17:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.935 17:17:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 17:17:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 17:17:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 17:17:56 -- paths/export.sh@5 -- # export PATH 00:14:59.935 17:17:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 17:17:56 -- nvmf/common.sh@46 -- # : 0 00:14:59.935 17:17:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:59.935 17:17:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:59.935 17:17:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:59.935 17:17:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.935 17:17:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.935 17:17:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:59.935 17:17:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:59.935 17:17:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:59.935 17:17:56 -- target/rpc.sh@11 -- # loops=5 00:14:59.935 17:17:56 -- target/rpc.sh@23 -- # nvmftestinit 00:14:59.935 17:17:56 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:59.935 17:17:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.935 17:17:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:59.935 17:17:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:59.935 17:17:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:59.935 17:17:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.935 17:17:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.935 17:17:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.935 17:17:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:59.935 17:17:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:59.935 17:17:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:59.935 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:06.502 17:18:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:06.502 17:18:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:06.502 17:18:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:06.502 17:18:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:06.502 17:18:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:06.502 17:18:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:06.502 17:18:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:06.502 17:18:02 -- nvmf/common.sh@294 -- # net_devs=() 00:15:06.502 17:18:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:06.502 17:18:02 -- nvmf/common.sh@295 -- # e810=() 00:15:06.502 17:18:02 -- nvmf/common.sh@295 -- # local -ga e810 00:15:06.502 
17:18:02 -- nvmf/common.sh@296 -- # x722=() 00:15:06.502 17:18:02 -- nvmf/common.sh@296 -- # local -ga x722 00:15:06.502 17:18:02 -- nvmf/common.sh@297 -- # mlx=() 00:15:06.502 17:18:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:06.502 17:18:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.502 17:18:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:06.502 17:18:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:06.502 17:18:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:06.502 17:18:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:06.502 17:18:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:06.502 17:18:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:06.502 17:18:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:06.502 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:06.502 17:18:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:06.502 17:18:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:06.502 17:18:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:06.502 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:06.502 17:18:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:06.502 17:18:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:06.502 17:18:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:06.502 17:18:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.502 17:18:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:06.502 17:18:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:06.502 17:18:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:06.502 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:06.502 17:18:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.502 17:18:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:06.502 17:18:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.502 17:18:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:06.502 17:18:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.502 17:18:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:06.502 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:06.502 17:18:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.502 17:18:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:06.502 17:18:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:06.502 17:18:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:06.502 17:18:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:06.502 17:18:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:06.502 17:18:02 -- nvmf/common.sh@57 -- # uname 00:15:06.502 17:18:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:06.502 17:18:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:06.502 17:18:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:06.502 17:18:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:06.502 17:18:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:06.502 17:18:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:06.502 17:18:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:06.502 17:18:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:06.502 17:18:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:06.502 17:18:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:06.502 17:18:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:06.502 17:18:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:06.502 17:18:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:06.502 17:18:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:06.502 17:18:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:06.502 17:18:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:06.502 17:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:06.502 17:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.502 17:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:06.502 17:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:06.502 17:18:03 -- nvmf/common.sh@104 -- # continue 2 00:15:06.502 17:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:06.502 17:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.502 17:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:06.502 17:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.502 17:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:06.502 17:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:06.502 17:18:03 -- nvmf/common.sh@104 -- # continue 2 00:15:06.502 17:18:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:06.502 17:18:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
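(Editor's note) The allocate_nic_ips step that follows walks get_rdma_if_list and reads each interface's first IPv4 address with the ip/awk/cut pipeline shown in the trace. A small sketch of that helper, taken directly from the commands logged here (the interface name mlx_0_0 is specific to this host):

#!/usr/bin/env bash
# Sketch of the get_ip_address step: first IPv4 address on an RDMA netdev.
get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is addr/CIDR
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

ip_addr=$(get_ip_address mlx_0_0)   # interface name from this host's trace
[[ -n $ip_addr ]] && echo "mlx_0_0 -> $ip_addr"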
00:15:06.502 17:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:06.502 17:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:06.502 17:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:06.502 17:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:06.502 17:18:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:06.502 17:18:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:06.502 17:18:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:06.502 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:06.502 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:06.502 altname enp217s0f0np0 00:15:06.502 altname ens818f0np0 00:15:06.502 inet 192.168.100.8/24 scope global mlx_0_0 00:15:06.502 valid_lft forever preferred_lft forever 00:15:06.502 17:18:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:06.502 17:18:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:06.502 17:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:06.502 17:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:06.502 17:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:06.502 17:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:06.502 17:18:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:06.502 17:18:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:06.502 17:18:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:06.502 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:06.502 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:06.502 altname enp217s0f1np1 00:15:06.502 altname ens818f1np1 00:15:06.502 inet 192.168.100.9/24 scope global mlx_0_1 00:15:06.503 valid_lft forever preferred_lft forever 00:15:06.503 17:18:03 -- nvmf/common.sh@410 -- # return 0 00:15:06.503 17:18:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:06.503 17:18:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:06.503 17:18:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:06.503 17:18:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:06.503 17:18:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:06.503 17:18:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:06.503 17:18:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:06.503 17:18:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:06.503 17:18:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:06.503 17:18:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:06.503 17:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:06.503 17:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.503 17:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:06.503 17:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:06.503 17:18:03 -- nvmf/common.sh@104 -- # continue 2 00:15:06.503 17:18:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:06.503 17:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.503 17:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:06.503 17:18:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.503 17:18:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:06.503 17:18:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:06.503 17:18:03 -- nvmf/common.sh@104 -- # continue 2 00:15:06.503 17:18:03 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:15:06.503 17:18:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:06.503 17:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:06.503 17:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:06.503 17:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:06.503 17:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:06.503 17:18:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:06.503 17:18:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:06.503 17:18:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:06.503 17:18:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:06.503 17:18:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:06.503 17:18:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:06.503 17:18:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:06.503 192.168.100.9' 00:15:06.503 17:18:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:06.503 192.168.100.9' 00:15:06.503 17:18:03 -- nvmf/common.sh@445 -- # head -n 1 00:15:06.503 17:18:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:06.503 17:18:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:06.503 192.168.100.9' 00:15:06.503 17:18:03 -- nvmf/common.sh@446 -- # tail -n +2 00:15:06.503 17:18:03 -- nvmf/common.sh@446 -- # head -n 1 00:15:06.503 17:18:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:06.503 17:18:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:06.503 17:18:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:06.503 17:18:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:06.503 17:18:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:06.503 17:18:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:06.503 17:18:03 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:06.503 17:18:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:06.503 17:18:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.503 17:18:03 -- common/autotest_common.sh@10 -- # set +x 00:15:06.503 17:18:03 -- nvmf/common.sh@469 -- # nvmfpid=1295726 00:15:06.503 17:18:03 -- nvmf/common.sh@470 -- # waitforlisten 1295726 00:15:06.503 17:18:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.503 17:18:03 -- common/autotest_common.sh@829 -- # '[' -z 1295726 ']' 00:15:06.503 17:18:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.503 17:18:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.503 17:18:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.503 17:18:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.503 17:18:03 -- common/autotest_common.sh@10 -- # set +x 00:15:06.762 [2024-12-14 17:18:03.221131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:06.762 [2024-12-14 17:18:03.221189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.762 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.762 [2024-12-14 17:18:03.292638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.762 [2024-12-14 17:18:03.331474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:06.762 [2024-12-14 17:18:03.331592] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.762 [2024-12-14 17:18:03.331601] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.762 [2024-12-14 17:18:03.331609] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.762 [2024-12-14 17:18:03.331662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.762 [2024-12-14 17:18:03.331773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.762 [2024-12-14 17:18:03.331836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.763 [2024-12-14 17:18:03.331838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.699 17:18:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.699 17:18:04 -- common/autotest_common.sh@862 -- # return 0 00:15:07.699 17:18:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:07.699 17:18:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.699 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:07.699 17:18:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.699 17:18:04 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:07.699 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.699 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:07.699 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.699 17:18:04 -- target/rpc.sh@26 -- # stats='{ 00:15:07.699 "tick_rate": 2500000000, 00:15:07.699 "poll_groups": [ 00:15:07.699 { 00:15:07.699 "name": "nvmf_tgt_poll_group_0", 00:15:07.699 "admin_qpairs": 0, 00:15:07.699 "io_qpairs": 0, 00:15:07.699 "current_admin_qpairs": 0, 00:15:07.699 "current_io_qpairs": 0, 00:15:07.699 "pending_bdev_io": 0, 00:15:07.699 "completed_nvme_io": 0, 00:15:07.699 "transports": [] 00:15:07.699 }, 00:15:07.699 { 00:15:07.699 "name": "nvmf_tgt_poll_group_1", 00:15:07.699 "admin_qpairs": 0, 00:15:07.699 "io_qpairs": 0, 00:15:07.699 "current_admin_qpairs": 0, 00:15:07.699 "current_io_qpairs": 0, 00:15:07.699 "pending_bdev_io": 0, 00:15:07.699 "completed_nvme_io": 0, 00:15:07.699 "transports": [] 00:15:07.699 }, 00:15:07.699 { 00:15:07.699 "name": "nvmf_tgt_poll_group_2", 00:15:07.699 "admin_qpairs": 0, 00:15:07.699 "io_qpairs": 0, 00:15:07.699 "current_admin_qpairs": 0, 00:15:07.699 "current_io_qpairs": 0, 00:15:07.699 "pending_bdev_io": 0, 00:15:07.699 "completed_nvme_io": 0, 00:15:07.699 "transports": [] 00:15:07.699 }, 00:15:07.699 { 00:15:07.699 "name": "nvmf_tgt_poll_group_3", 00:15:07.699 "admin_qpairs": 0, 00:15:07.699 "io_qpairs": 0, 00:15:07.699 "current_admin_qpairs": 0, 00:15:07.699 "current_io_qpairs": 0, 00:15:07.699 "pending_bdev_io": 0, 00:15:07.699 "completed_nvme_io": 0, 00:15:07.699 "transports": [] 
00:15:07.699 } 00:15:07.699 ] 00:15:07.699 }' 00:15:07.699 17:18:04 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:07.699 17:18:04 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:07.699 17:18:04 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:07.699 17:18:04 -- target/rpc.sh@15 -- # wc -l 00:15:07.699 17:18:04 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:07.699 17:18:04 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:07.700 17:18:04 -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:07.700 17:18:04 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:07.700 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.700 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:07.700 [2024-12-14 17:18:04.238732] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe22130/0xe26600) succeed. 00:15:07.700 [2024-12-14 17:18:04.248215] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe236d0/0xe67ca0) succeed. 00:15:07.700 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.700 17:18:04 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:07.700 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.700 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:07.959 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.959 17:18:04 -- target/rpc.sh@33 -- # stats='{ 00:15:07.959 "tick_rate": 2500000000, 00:15:07.959 "poll_groups": [ 00:15:07.959 { 00:15:07.959 "name": "nvmf_tgt_poll_group_0", 00:15:07.959 "admin_qpairs": 0, 00:15:07.959 "io_qpairs": 0, 00:15:07.959 "current_admin_qpairs": 0, 00:15:07.959 "current_io_qpairs": 0, 00:15:07.959 "pending_bdev_io": 0, 00:15:07.959 "completed_nvme_io": 0, 00:15:07.959 "transports": [ 00:15:07.959 { 00:15:07.959 "trtype": "RDMA", 00:15:07.959 "pending_data_buffer": 0, 00:15:07.959 "devices": [ 00:15:07.959 { 00:15:07.959 "name": "mlx5_0", 00:15:07.959 "polls": 15988, 00:15:07.959 "idle_polls": 15988, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 }, 00:15:07.959 { 00:15:07.959 "name": "mlx5_1", 00:15:07.959 "polls": 15988, 00:15:07.959 "idle_polls": 15988, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 }, 00:15:07.959 { 00:15:07.959 "name": "nvmf_tgt_poll_group_1", 00:15:07.959 "admin_qpairs": 0, 00:15:07.959 "io_qpairs": 0, 00:15:07.959 "current_admin_qpairs": 0, 00:15:07.959 "current_io_qpairs": 0, 00:15:07.959 "pending_bdev_io": 0, 00:15:07.959 "completed_nvme_io": 0, 00:15:07.959 "transports": [ 00:15:07.959 { 00:15:07.959 "trtype": "RDMA", 00:15:07.959 "pending_data_buffer": 0, 00:15:07.959 "devices": [ 00:15:07.959 { 00:15:07.959 "name": "mlx5_0", 00:15:07.959 "polls": 10195, 
00:15:07.959 "idle_polls": 10195, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 }, 00:15:07.959 { 00:15:07.959 "name": "mlx5_1", 00:15:07.959 "polls": 10195, 00:15:07.959 "idle_polls": 10195, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 }, 00:15:07.959 { 00:15:07.959 "name": "nvmf_tgt_poll_group_2", 00:15:07.959 "admin_qpairs": 0, 00:15:07.959 "io_qpairs": 0, 00:15:07.959 "current_admin_qpairs": 0, 00:15:07.959 "current_io_qpairs": 0, 00:15:07.959 "pending_bdev_io": 0, 00:15:07.959 "completed_nvme_io": 0, 00:15:07.959 "transports": [ 00:15:07.959 { 00:15:07.959 "trtype": "RDMA", 00:15:07.959 "pending_data_buffer": 0, 00:15:07.959 "devices": [ 00:15:07.959 { 00:15:07.959 "name": "mlx5_0", 00:15:07.959 "polls": 5755, 00:15:07.959 "idle_polls": 5755, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 }, 00:15:07.959 { 00:15:07.959 "name": "mlx5_1", 00:15:07.959 "polls": 5755, 00:15:07.959 "idle_polls": 5755, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 }, 00:15:07.959 { 00:15:07.959 "name": "nvmf_tgt_poll_group_3", 00:15:07.959 "admin_qpairs": 0, 00:15:07.959 "io_qpairs": 0, 00:15:07.959 "current_admin_qpairs": 0, 00:15:07.959 "current_io_qpairs": 0, 00:15:07.959 "pending_bdev_io": 0, 00:15:07.959 "completed_nvme_io": 0, 00:15:07.959 "transports": [ 00:15:07.959 { 00:15:07.959 "trtype": "RDMA", 00:15:07.959 "pending_data_buffer": 0, 00:15:07.959 "devices": [ 00:15:07.959 { 00:15:07.959 "name": "mlx5_0", 00:15:07.959 "polls": 918, 00:15:07.959 "idle_polls": 918, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 }, 00:15:07.959 { 00:15:07.959 "name": "mlx5_1", 00:15:07.959 "polls": 918, 
00:15:07.959 "idle_polls": 918, 00:15:07.959 "completions": 0, 00:15:07.959 "requests": 0, 00:15:07.959 "request_latency": 0, 00:15:07.959 "pending_free_request": 0, 00:15:07.959 "pending_rdma_read": 0, 00:15:07.959 "pending_rdma_write": 0, 00:15:07.959 "pending_rdma_send": 0, 00:15:07.959 "total_send_wrs": 0, 00:15:07.959 "send_doorbell_updates": 0, 00:15:07.959 "total_recv_wrs": 4096, 00:15:07.959 "recv_doorbell_updates": 1 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 } 00:15:07.959 ] 00:15:07.959 }' 00:15:07.959 17:18:04 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:07.959 17:18:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:07.959 17:18:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:07.959 17:18:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:07.959 17:18:04 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:07.959 17:18:04 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:07.959 17:18:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:07.959 17:18:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:07.959 17:18:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:07.959 17:18:04 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:07.959 17:18:04 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:15:07.959 17:18:04 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:15:07.959 17:18:04 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:15:07.959 17:18:04 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:15:07.959 17:18:04 -- target/rpc.sh@15 -- # wc -l 00:15:07.959 17:18:04 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:15:07.959 17:18:04 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:15:07.959 17:18:04 -- target/rpc.sh@41 -- # transport_type=RDMA 00:15:07.959 17:18:04 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:15:07.959 17:18:04 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:15:07.959 17:18:04 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:15:07.959 17:18:04 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:15:07.959 17:18:04 -- target/rpc.sh@15 -- # wc -l 00:15:07.959 17:18:04 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:15:07.959 17:18:04 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:07.959 17:18:04 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:07.959 17:18:04 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:07.959 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.959 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:08.219 Malloc1 00:15:08.219 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.219 17:18:04 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:08.219 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.219 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:08.219 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.219 17:18:04 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:08.219 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.219 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:08.219 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.219 
17:18:04 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:08.219 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.219 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:08.219 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.219 17:18:04 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:08.219 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.219 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:08.219 [2024-12-14 17:18:04.695822] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:08.219 17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.219 17:18:04 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:08.219 17:18:04 -- common/autotest_common.sh@650 -- # local es=0 00:15:08.219 17:18:04 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:08.219 17:18:04 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:08.219 17:18:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.219 17:18:04 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:08.219 17:18:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.219 17:18:04 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:08.219 17:18:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.219 17:18:04 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:08.219 17:18:04 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:08.219 17:18:04 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:08.219 [2024-12-14 17:18:04.741618] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:15:08.219 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:08.219 could not add new controller: failed to write to nvme-fabrics device 00:15:08.219 17:18:04 -- common/autotest_common.sh@653 -- # es=1 00:15:08.219 17:18:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:08.219 17:18:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:08.219 17:18:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:08.219 17:18:04 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:08.219 17:18:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.219 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:08.219 
17:18:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.219 17:18:04 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:09.155 17:18:05 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.155 17:18:05 -- common/autotest_common.sh@1187 -- # local i=0 00:15:09.155 17:18:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.155 17:18:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:09.155 17:18:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:11.688 17:18:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:11.688 17:18:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:11.688 17:18:07 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.688 17:18:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:11.688 17:18:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.688 17:18:07 -- common/autotest_common.sh@1197 -- # return 0 00:15:11.688 17:18:07 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.255 17:18:08 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:12.255 17:18:08 -- common/autotest_common.sh@1208 -- # local i=0 00:15:12.255 17:18:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:12.255 17:18:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.255 17:18:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:12.255 17:18:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.255 17:18:08 -- common/autotest_common.sh@1220 -- # return 0 00:15:12.255 17:18:08 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:12.255 17:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.255 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:15:12.255 17:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.255 17:18:08 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:12.255 17:18:08 -- common/autotest_common.sh@650 -- # local es=0 00:15:12.255 17:18:08 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:12.255 17:18:08 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:12.255 17:18:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.255 17:18:08 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:12.255 17:18:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.255 17:18:08 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:12.255 17:18:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.255 17:18:08 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:12.255 
17:18:08 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:12.255 17:18:08 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:12.255 [2024-12-14 17:18:08.843792] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:15:12.255 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:12.255 could not add new controller: failed to write to nvme-fabrics device 00:15:12.255 17:18:08 -- common/autotest_common.sh@653 -- # es=1 00:15:12.255 17:18:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.255 17:18:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.255 17:18:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.255 17:18:08 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:12.255 17:18:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.255 17:18:08 -- common/autotest_common.sh@10 -- # set +x 00:15:12.255 17:18:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.255 17:18:08 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:13.191 17:18:09 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.191 17:18:09 -- common/autotest_common.sh@1187 -- # local i=0 00:15:13.191 17:18:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.191 17:18:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:13.191 17:18:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:15.725 17:18:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:15.725 17:18:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:15.725 17:18:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.725 17:18:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:15.725 17:18:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.725 17:18:11 -- common/autotest_common.sh@1197 -- # return 0 00:15:15.725 17:18:11 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.292 17:18:12 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.292 17:18:12 -- common/autotest_common.sh@1208 -- # local i=0 00:15:16.292 17:18:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:16.292 17:18:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.292 17:18:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:16.292 17:18:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.293 17:18:12 -- common/autotest_common.sh@1220 -- # return 0 00:15:16.293 17:18:12 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.293 17:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.293 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.293 17:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
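That completes the access-control part of the test: with allow_any_host disabled the connect from this host NQN is rejected ("does not allow host"), it works once the host is whitelisted with nvmf_subsystem_add_host, fails again after nvmf_subsystem_remove_host, and succeeds for any host once allow_any_host is re-enabled; the expected failures are wrapped in the NOT helper, which only passes when the command exits non-zero. Condensed into plain RPC calls (rpc.py path shortened; addresses, NQNs and serial as used above), the sequence is roughly:

    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    $RPC nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns $NQN Malloc1
    $RPC nvmf_subsystem_allow_any_host -d $NQN                 # only whitelisted hosts from now on
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420

    nvme connect -i 15 -t rdma -n $NQN -a 192.168.100.8 -s 4420 \
        --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:} && exit 1   # must be rejected

    $RPC nvmf_subsystem_add_host $NQN $HOSTNQN                 # whitelist this host
    nvme connect -i 15 -t rdma -n $NQN -a 192.168.100.8 -s 4420 \
        --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:}             # now succeeds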
00:15:16.293 17:18:12 -- target/rpc.sh@81 -- # seq 1 5 00:15:16.293 17:18:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:16.293 17:18:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:16.293 17:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.293 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.293 17:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.293 17:18:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:16.293 17:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.293 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.293 [2024-12-14 17:18:12.912318] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:16.293 17:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.293 17:18:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:16.293 17:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.293 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.293 17:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.293 17:18:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:16.293 17:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.293 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.293 17:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.293 17:18:12 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:17.229 17:18:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.229 17:18:13 -- common/autotest_common.sh@1187 -- # local i=0 00:15:17.229 17:18:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.229 17:18:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:17.229 17:18:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:19.761 17:18:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:19.761 17:18:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:19.761 17:18:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.761 17:18:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:19.761 17:18:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.761 17:18:15 -- common/autotest_common.sh@1197 -- # return 0 00:15:19.761 17:18:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.329 17:18:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.329 17:18:16 -- common/autotest_common.sh@1208 -- # local i=0 00:15:20.329 17:18:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:20.329 17:18:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.329 17:18:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:20.329 17:18:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.329 17:18:16 -- common/autotest_common.sh@1220 -- # return 0 00:15:20.329 
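One connect/disconnect cycle of the loop is finished at this point. The waitforserial and waitforserial_disconnect helpers expanded above do nothing more than poll lsblk until a block device with the subsystem's serial number (SPDKISFASTANDAWESOME) appears or disappears; the loop bound and the 2-second sleep are taken from the trace, the exact autotest_common.sh wording is an assumption:

    waitforserial() {
        local serial=$1 expected=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }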
17:18:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.329 17:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.329 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.329 17:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.329 17:18:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.329 17:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.329 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.329 17:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.329 17:18:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:20.329 17:18:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.329 17:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.329 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.329 17:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.329 17:18:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.329 17:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.329 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.329 [2024-12-14 17:18:16.952993] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.329 17:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.329 17:18:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:20.329 17:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.329 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.329 17:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.329 17:18:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.329 17:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.329 17:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.329 17:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.329 17:18:16 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:21.265 17:18:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:21.265 17:18:17 -- common/autotest_common.sh@1187 -- # local i=0 00:15:21.524 17:18:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.524 17:18:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:21.524 17:18:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:23.427 17:18:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:23.427 17:18:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:23.427 17:18:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.427 17:18:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:23.427 17:18:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.427 17:18:19 -- common/autotest_common.sh@1197 -- # return 0 00:15:23.427 17:18:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.363 17:18:20 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.363 17:18:20 -- common/autotest_common.sh@1208 -- # local i=0 00:15:24.363 17:18:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:24.363 17:18:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.363 17:18:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:24.363 17:18:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.363 17:18:20 -- common/autotest_common.sh@1220 -- # return 0 00:15:24.363 17:18:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.363 17:18:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.363 17:18:20 -- common/autotest_common.sh@10 -- # set +x 00:15:24.363 17:18:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.363 17:18:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.363 17:18:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.363 17:18:20 -- common/autotest_common.sh@10 -- # set +x 00:15:24.363 17:18:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.363 17:18:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:24.363 17:18:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.363 17:18:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.364 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:15:24.364 17:18:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.364 17:18:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:24.364 17:18:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.364 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:15:24.364 [2024-12-14 17:18:21.014156] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:24.364 17:18:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.364 17:18:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:24.364 17:18:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.364 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:15:24.364 17:18:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.364 17:18:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.364 17:18:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.364 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:15:24.364 17:18:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.364 17:18:21 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:25.740 17:18:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.740 17:18:22 -- common/autotest_common.sh@1187 -- # local i=0 00:15:25.740 17:18:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.740 17:18:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:25.740 17:18:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:27.643 17:18:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:27.643 17:18:24 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:27.643 17:18:24 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.643 17:18:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:27.643 17:18:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.643 17:18:24 -- common/autotest_common.sh@1197 -- # return 0 00:15:27.643 17:18:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.581 17:18:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:28.581 17:18:24 -- common/autotest_common.sh@1208 -- # local i=0 00:15:28.581 17:18:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:28.581 17:18:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.581 17:18:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:28.581 17:18:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.581 17:18:25 -- common/autotest_common.sh@1220 -- # return 0 00:15:28.581 17:18:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:28.581 17:18:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.581 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 17:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.581 17:18:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.581 17:18:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.581 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 17:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.581 17:18:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:28.581 17:18:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.581 17:18:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.581 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 17:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.581 17:18:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:28.581 17:18:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.581 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 [2024-12-14 17:18:25.054436] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:28.581 17:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.581 17:18:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:28.581 17:18:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.581 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 17:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.581 17:18:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.581 17:18:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.581 17:18:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.581 17:18:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.581 17:18:25 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:29.516 17:18:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.516 17:18:26 -- common/autotest_common.sh@1187 -- # local i=0 00:15:29.516 17:18:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.516 17:18:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:29.516 17:18:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:31.422 17:18:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:31.422 17:18:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:31.422 17:18:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.422 17:18:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:31.422 17:18:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.422 17:18:28 -- common/autotest_common.sh@1197 -- # return 0 00:15:31.422 17:18:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.866 17:18:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.866 17:18:29 -- common/autotest_common.sh@1208 -- # local i=0 00:15:32.866 17:18:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:32.866 17:18:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.866 17:18:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:32.866 17:18:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.866 17:18:29 -- common/autotest_common.sh@1220 -- # return 0 00:15:32.866 17:18:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.866 17:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.866 17:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 17:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.866 17:18:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.866 17:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.866 17:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 17:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.866 17:18:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:32.866 17:18:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:32.866 17:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.866 17:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 17:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.866 17:18:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:32.866 17:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.866 17:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 [2024-12-14 17:18:29.124579] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:32.866 17:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.866 17:18:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:32.866 17:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.866 17:18:29 -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.866 17:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.866 17:18:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:32.866 17:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.866 17:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:32.866 17:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.866 17:18:29 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:33.458 17:18:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:33.458 17:18:30 -- common/autotest_common.sh@1187 -- # local i=0 00:15:33.458 17:18:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.458 17:18:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:33.458 17:18:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:35.991 17:18:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:35.991 17:18:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:35.991 17:18:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.991 17:18:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:35.991 17:18:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.991 17:18:32 -- common/autotest_common.sh@1197 -- # return 0 00:15:35.991 17:18:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:36.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.558 17:18:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:36.558 17:18:33 -- common/autotest_common.sh@1208 -- # local i=0 00:15:36.558 17:18:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:36.558 17:18:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.558 17:18:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:36.559 17:18:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.559 17:18:33 -- common/autotest_common.sh@1220 -- # return 0 00:15:36.559 17:18:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@99 -- # seq 1 5 00:15:36.559 17:18:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.559 17:18:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 [2024-12-14 17:18:33.183232] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.559 17:18:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 [2024-12-14 17:18:33.231439] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:36.559 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 17:18:33 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.559 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 
17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.818 17:18:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 [2024-12-14 17:18:33.279604] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.818 17:18:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 [2024-12-14 17:18:33.327797] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.818 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.818 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.818 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.818 17:18:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.818 17:18:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.819 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.819 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.819 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.819 17:18:33 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:36.819 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.819 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.819 [2024-12-14 17:18:33.375971] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:36.819 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.819 17:18:33 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.819 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.819 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.819 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.819 17:18:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.819 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.819 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.819 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.819 17:18:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.819 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.819 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.819 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.819 17:18:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.819 17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.819 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.819 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.819 17:18:33 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:36.819 
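The second loop (target/rpc.sh@99-@107 in the trace) runs the bare subsystem lifecycle five times without connecting any host: create the subsystem, attach the RDMA listener, add the Malloc1 namespace, allow any host, then remove the namespace and delete the subsystem again. One iteration, written out as plain RPCs and reusing $RPC and $NQN from the sketch above; the nvmf_get_stats dump that follows then verifies the earlier I/O loops actually exercised the RDMA devices:

    for i in $(seq 1 5); do
        $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME
        $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420
        $RPC nvmf_subsystem_add_ns $NQN Malloc1
        $RPC nvmf_subsystem_allow_any_host $NQN
        $RPC nvmf_subsystem_remove_ns $NQN 1      # namespace ID 1
        $RPC nvmf_delete_subsystem $NQN
    done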
17:18:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.819 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:36.819 17:18:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.819 17:18:33 -- target/rpc.sh@110 -- # stats='{ 00:15:36.819 "tick_rate": 2500000000, 00:15:36.819 "poll_groups": [ 00:15:36.819 { 00:15:36.819 "name": "nvmf_tgt_poll_group_0", 00:15:36.819 "admin_qpairs": 2, 00:15:36.819 "io_qpairs": 27, 00:15:36.819 "current_admin_qpairs": 0, 00:15:36.819 "current_io_qpairs": 0, 00:15:36.819 "pending_bdev_io": 0, 00:15:36.819 "completed_nvme_io": 84, 00:15:36.819 "transports": [ 00:15:36.819 { 00:15:36.819 "trtype": "RDMA", 00:15:36.819 "pending_data_buffer": 0, 00:15:36.819 "devices": [ 00:15:36.819 { 00:15:36.819 "name": "mlx5_0", 00:15:36.819 "polls": 3486263, 00:15:36.819 "idle_polls": 3486010, 00:15:36.819 "completions": 277, 00:15:36.819 "requests": 138, 00:15:36.819 "request_latency": 23738594, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 221, 00:15:36.819 "send_doorbell_updates": 125, 00:15:36.819 "total_recv_wrs": 4234, 00:15:36.819 "recv_doorbell_updates": 125 00:15:36.819 }, 00:15:36.819 { 00:15:36.819 "name": "mlx5_1", 00:15:36.819 "polls": 3486263, 00:15:36.819 "idle_polls": 3486263, 00:15:36.819 "completions": 0, 00:15:36.819 "requests": 0, 00:15:36.819 "request_latency": 0, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 0, 00:15:36.819 "send_doorbell_updates": 0, 00:15:36.819 "total_recv_wrs": 4096, 00:15:36.819 "recv_doorbell_updates": 1 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 }, 00:15:36.819 { 00:15:36.819 "name": "nvmf_tgt_poll_group_1", 00:15:36.819 "admin_qpairs": 2, 00:15:36.819 "io_qpairs": 26, 00:15:36.819 "current_admin_qpairs": 0, 00:15:36.819 "current_io_qpairs": 0, 00:15:36.819 "pending_bdev_io": 0, 00:15:36.819 "completed_nvme_io": 122, 00:15:36.819 "transports": [ 00:15:36.819 { 00:15:36.819 "trtype": "RDMA", 00:15:36.819 "pending_data_buffer": 0, 00:15:36.819 "devices": [ 00:15:36.819 { 00:15:36.819 "name": "mlx5_0", 00:15:36.819 "polls": 3433529, 00:15:36.819 "idle_polls": 3433213, 00:15:36.819 "completions": 350, 00:15:36.819 "requests": 175, 00:15:36.819 "request_latency": 32641374, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 296, 00:15:36.819 "send_doorbell_updates": 156, 00:15:36.819 "total_recv_wrs": 4271, 00:15:36.819 "recv_doorbell_updates": 157 00:15:36.819 }, 00:15:36.819 { 00:15:36.819 "name": "mlx5_1", 00:15:36.819 "polls": 3433529, 00:15:36.819 "idle_polls": 3433529, 00:15:36.819 "completions": 0, 00:15:36.819 "requests": 0, 00:15:36.819 "request_latency": 0, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 0, 00:15:36.819 "send_doorbell_updates": 0, 00:15:36.819 "total_recv_wrs": 4096, 00:15:36.819 "recv_doorbell_updates": 1 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 }, 00:15:36.819 { 00:15:36.819 "name": "nvmf_tgt_poll_group_2", 00:15:36.819 "admin_qpairs": 1, 00:15:36.819 "io_qpairs": 26, 00:15:36.819 
"current_admin_qpairs": 0, 00:15:36.819 "current_io_qpairs": 0, 00:15:36.819 "pending_bdev_io": 0, 00:15:36.819 "completed_nvme_io": 124, 00:15:36.819 "transports": [ 00:15:36.819 { 00:15:36.819 "trtype": "RDMA", 00:15:36.819 "pending_data_buffer": 0, 00:15:36.819 "devices": [ 00:15:36.819 { 00:15:36.819 "name": "mlx5_0", 00:15:36.819 "polls": 3509383, 00:15:36.819 "idle_polls": 3509118, 00:15:36.819 "completions": 303, 00:15:36.819 "requests": 151, 00:15:36.819 "request_latency": 32435788, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 262, 00:15:36.819 "send_doorbell_updates": 129, 00:15:36.819 "total_recv_wrs": 4247, 00:15:36.819 "recv_doorbell_updates": 129 00:15:36.819 }, 00:15:36.819 { 00:15:36.819 "name": "mlx5_1", 00:15:36.819 "polls": 3509383, 00:15:36.819 "idle_polls": 3509383, 00:15:36.819 "completions": 0, 00:15:36.819 "requests": 0, 00:15:36.819 "request_latency": 0, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 0, 00:15:36.819 "send_doorbell_updates": 0, 00:15:36.819 "total_recv_wrs": 4096, 00:15:36.819 "recv_doorbell_updates": 1 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 }, 00:15:36.819 { 00:15:36.819 "name": "nvmf_tgt_poll_group_3", 00:15:36.819 "admin_qpairs": 2, 00:15:36.819 "io_qpairs": 26, 00:15:36.819 "current_admin_qpairs": 0, 00:15:36.819 "current_io_qpairs": 0, 00:15:36.819 "pending_bdev_io": 0, 00:15:36.819 "completed_nvme_io": 125, 00:15:36.819 "transports": [ 00:15:36.819 { 00:15:36.819 "trtype": "RDMA", 00:15:36.819 "pending_data_buffer": 0, 00:15:36.819 "devices": [ 00:15:36.819 { 00:15:36.819 "name": "mlx5_0", 00:15:36.819 "polls": 2692978, 00:15:36.819 "idle_polls": 2692665, 00:15:36.819 "completions": 356, 00:15:36.819 "requests": 178, 00:15:36.819 "request_latency": 36586344, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 302, 00:15:36.819 "send_doorbell_updates": 153, 00:15:36.819 "total_recv_wrs": 4274, 00:15:36.819 "recv_doorbell_updates": 154 00:15:36.819 }, 00:15:36.819 { 00:15:36.819 "name": "mlx5_1", 00:15:36.819 "polls": 2692978, 00:15:36.819 "idle_polls": 2692978, 00:15:36.819 "completions": 0, 00:15:36.819 "requests": 0, 00:15:36.819 "request_latency": 0, 00:15:36.819 "pending_free_request": 0, 00:15:36.819 "pending_rdma_read": 0, 00:15:36.819 "pending_rdma_write": 0, 00:15:36.819 "pending_rdma_send": 0, 00:15:36.819 "total_send_wrs": 0, 00:15:36.819 "send_doorbell_updates": 0, 00:15:36.819 "total_recv_wrs": 4096, 00:15:36.819 "recv_doorbell_updates": 1 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 } 00:15:36.819 ] 00:15:36.819 }' 00:15:36.819 17:18:33 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:36.819 17:18:33 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:36.819 17:18:33 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:36.819 17:18:33 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.078 17:18:33 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:37.078 17:18:33 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:37.078 17:18:33 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:37.078 
17:18:33 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:37.078 17:18:33 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.078 17:18:33 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:37.078 17:18:33 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:37.078 17:18:33 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:37.078 17:18:33 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:37.078 17:18:33 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:37.078 17:18:33 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.078 17:18:33 -- target/rpc.sh@117 -- # (( 1286 > 0 )) 00:15:37.078 17:18:33 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:37.078 17:18:33 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:37.078 17:18:33 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:37.078 17:18:33 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.078 17:18:33 -- target/rpc.sh@118 -- # (( 125402100 > 0 )) 00:15:37.078 17:18:33 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:37.078 17:18:33 -- target/rpc.sh@123 -- # nvmftestfini 00:15:37.078 17:18:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:37.078 17:18:33 -- nvmf/common.sh@116 -- # sync 00:15:37.078 17:18:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:37.078 17:18:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:37.078 17:18:33 -- nvmf/common.sh@119 -- # set +e 00:15:37.078 17:18:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:37.079 17:18:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:37.079 rmmod nvme_rdma 00:15:37.079 rmmod nvme_fabrics 00:15:37.079 17:18:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:37.079 17:18:33 -- nvmf/common.sh@123 -- # set -e 00:15:37.079 17:18:33 -- nvmf/common.sh@124 -- # return 0 00:15:37.079 17:18:33 -- nvmf/common.sh@477 -- # '[' -n 1295726 ']' 00:15:37.079 17:18:33 -- nvmf/common.sh@478 -- # killprocess 1295726 00:15:37.079 17:18:33 -- common/autotest_common.sh@936 -- # '[' -z 1295726 ']' 00:15:37.079 17:18:33 -- common/autotest_common.sh@940 -- # kill -0 1295726 00:15:37.079 17:18:33 -- common/autotest_common.sh@941 -- # uname 00:15:37.079 17:18:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.079 17:18:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1295726 00:15:37.338 17:18:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.338 17:18:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.338 17:18:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1295726' 00:15:37.338 killing process with pid 1295726 00:15:37.338 17:18:33 -- common/autotest_common.sh@955 -- # kill 1295726 00:15:37.338 17:18:33 -- common/autotest_common.sh@960 -- # wait 1295726 00:15:37.597 17:18:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:37.597 17:18:34 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:37.597 00:15:37.597 real 0m37.649s 00:15:37.597 user 2m4.547s 00:15:37.597 sys 0m6.783s 00:15:37.597 17:18:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:37.597 17:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:37.597 ************************************ 00:15:37.597 END TEST nvmf_rpc 00:15:37.597 ************************************ 00:15:37.597 17:18:34 -- 
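The pass criteria at the end of nvmf_rpc are aggregate: total io_qpairs (105), RDMA completions (1286) and request_latency (125402100) summed over every device in every poll group must all be greater than zero, after which nvmftestfini unloads the host-side modules and kills the target process. The completion check boils down to the following (jq filter copied from the trace; invoking rpc.py directly is an assumption, since the helper may reuse the captured $stats):

    completions=$($RPC nvmf_get_stats \
        | jq '.poll_groups[].transports[].devices[].completions' \
        | awk '{s+=$1} END {print s}')
    (( completions > 0 )) || exit 1

    # host-side teardown, as performed by nvmftestfini above
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics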
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:37.597 17:18:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:37.597 17:18:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.597 17:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:37.597 ************************************ 00:15:37.597 START TEST nvmf_invalid 00:15:37.597 ************************************ 00:15:37.597 17:18:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:37.597 * Looking for test storage... 00:15:37.597 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:37.597 17:18:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:37.597 17:18:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:37.597 17:18:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:37.597 17:18:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:37.597 17:18:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:37.597 17:18:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:37.597 17:18:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:37.597 17:18:34 -- scripts/common.sh@335 -- # IFS=.-: 00:15:37.597 17:18:34 -- scripts/common.sh@335 -- # read -ra ver1 00:15:37.597 17:18:34 -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.597 17:18:34 -- scripts/common.sh@336 -- # read -ra ver2 00:15:37.597 17:18:34 -- scripts/common.sh@337 -- # local 'op=<' 00:15:37.597 17:18:34 -- scripts/common.sh@339 -- # ver1_l=2 00:15:37.597 17:18:34 -- scripts/common.sh@340 -- # ver2_l=1 00:15:37.597 17:18:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:37.597 17:18:34 -- scripts/common.sh@343 -- # case "$op" in 00:15:37.597 17:18:34 -- scripts/common.sh@344 -- # : 1 00:15:37.597 17:18:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:37.597 17:18:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.597 17:18:34 -- scripts/common.sh@364 -- # decimal 1 00:15:37.597 17:18:34 -- scripts/common.sh@352 -- # local d=1 00:15:37.597 17:18:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.597 17:18:34 -- scripts/common.sh@354 -- # echo 1 00:15:37.597 17:18:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:37.597 17:18:34 -- scripts/common.sh@365 -- # decimal 2 00:15:37.597 17:18:34 -- scripts/common.sh@352 -- # local d=2 00:15:37.597 17:18:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.597 17:18:34 -- scripts/common.sh@354 -- # echo 2 00:15:37.597 17:18:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:37.597 17:18:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:37.597 17:18:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:37.597 17:18:34 -- scripts/common.sh@367 -- # return 0 00:15:37.597 17:18:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.597 17:18:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:37.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.597 --rc genhtml_branch_coverage=1 00:15:37.597 --rc genhtml_function_coverage=1 00:15:37.597 --rc genhtml_legend=1 00:15:37.597 --rc geninfo_all_blocks=1 00:15:37.597 --rc geninfo_unexecuted_blocks=1 00:15:37.597 00:15:37.597 ' 00:15:37.597 17:18:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:37.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.597 --rc genhtml_branch_coverage=1 00:15:37.597 --rc genhtml_function_coverage=1 00:15:37.597 --rc genhtml_legend=1 00:15:37.597 --rc geninfo_all_blocks=1 00:15:37.597 --rc geninfo_unexecuted_blocks=1 00:15:37.597 00:15:37.597 ' 00:15:37.597 17:18:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:37.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.597 --rc genhtml_branch_coverage=1 00:15:37.597 --rc genhtml_function_coverage=1 00:15:37.597 --rc genhtml_legend=1 00:15:37.597 --rc geninfo_all_blocks=1 00:15:37.597 --rc geninfo_unexecuted_blocks=1 00:15:37.597 00:15:37.597 ' 00:15:37.597 17:18:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:37.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.597 --rc genhtml_branch_coverage=1 00:15:37.597 --rc genhtml_function_coverage=1 00:15:37.597 --rc genhtml_legend=1 00:15:37.597 --rc geninfo_all_blocks=1 00:15:37.597 --rc geninfo_unexecuted_blocks=1 00:15:37.597 00:15:37.597 ' 00:15:37.597 17:18:34 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.597 17:18:34 -- nvmf/common.sh@7 -- # uname -s 00:15:37.597 17:18:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.597 17:18:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.597 17:18:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.597 17:18:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.597 17:18:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.597 17:18:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.597 17:18:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.597 17:18:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.597 17:18:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.857 17:18:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.857 17:18:34 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:37.857 17:18:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:37.857 17:18:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.857 17:18:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.857 17:18:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.857 17:18:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:37.857 17:18:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.857 17:18:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.857 17:18:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.857 17:18:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.857 17:18:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.857 17:18:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.857 17:18:34 -- paths/export.sh@5 -- # export PATH 00:15:37.857 17:18:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.857 17:18:34 -- nvmf/common.sh@46 -- # : 0 00:15:37.857 17:18:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.857 17:18:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.857 17:18:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.857 17:18:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.857 17:18:34 -- 
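Before nvmf_invalid proceeds, nvmf/common.sh rebuilds the host identity that every nvme connect in these tests passes along: a host NQN from "nvme gen-hostnqn", the bare UUID as host ID, and the NVME_HOST argument array. Roughly as follows (deriving the host ID by stripping the NQN prefix is an assumption; the trace only shows the resulting values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the UUID part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT="nvme connect"               # becomes "nvme connect -i 15" once an mlx5 RDMA NIC is detected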
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.857 17:18:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:37.857 17:18:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.857 17:18:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.857 17:18:34 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:37.857 17:18:34 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:37.857 17:18:34 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:37.857 17:18:34 -- target/invalid.sh@14 -- # target=foobar 00:15:37.857 17:18:34 -- target/invalid.sh@16 -- # RANDOM=0 00:15:37.857 17:18:34 -- target/invalid.sh@34 -- # nvmftestinit 00:15:37.857 17:18:34 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:37.857 17:18:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.857 17:18:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.857 17:18:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.857 17:18:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.857 17:18:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.857 17:18:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.857 17:18:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.857 17:18:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:37.857 17:18:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:37.857 17:18:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:37.857 17:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:44.425 17:18:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:44.425 17:18:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:44.425 17:18:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:44.425 17:18:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:44.425 17:18:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:44.425 17:18:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:44.425 17:18:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:44.425 17:18:40 -- nvmf/common.sh@294 -- # net_devs=() 00:15:44.425 17:18:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:44.425 17:18:40 -- nvmf/common.sh@295 -- # e810=() 00:15:44.425 17:18:40 -- nvmf/common.sh@295 -- # local -ga e810 00:15:44.425 17:18:40 -- nvmf/common.sh@296 -- # x722=() 00:15:44.425 17:18:40 -- nvmf/common.sh@296 -- # local -ga x722 00:15:44.425 17:18:40 -- nvmf/common.sh@297 -- # mlx=() 00:15:44.425 17:18:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:44.425 17:18:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.425 17:18:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:44.425 17:18:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:44.425 17:18:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:44.425 17:18:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:44.425 17:18:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:44.425 17:18:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:44.425 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:44.425 17:18:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:44.425 17:18:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:44.425 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:44.425 17:18:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:44.425 17:18:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:44.425 17:18:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.425 17:18:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:44.425 17:18:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.425 17:18:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:44.425 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:44.425 17:18:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.425 17:18:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.425 17:18:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:44.425 17:18:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.425 17:18:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:44.425 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:44.425 17:18:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.425 17:18:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:44.425 17:18:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:44.425 17:18:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:44.425 17:18:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:44.425 17:18:40 -- nvmf/common.sh@57 -- # uname 00:15:44.425 17:18:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:44.425 17:18:40 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:44.425 17:18:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:44.425 17:18:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:44.425 17:18:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:44.425 17:18:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:44.425 17:18:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:44.425 17:18:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:44.425 17:18:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:44.425 17:18:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:44.425 17:18:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:44.425 17:18:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:44.425 17:18:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:44.425 17:18:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:44.425 17:18:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:44.425 17:18:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:44.425 17:18:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:44.425 17:18:40 -- nvmf/common.sh@104 -- # continue 2 00:15:44.425 17:18:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.425 17:18:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:44.425 17:18:40 -- nvmf/common.sh@104 -- # continue 2 00:15:44.425 17:18:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:44.425 17:18:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:44.425 17:18:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:44.425 17:18:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:44.425 17:18:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.425 17:18:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.425 17:18:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:44.425 17:18:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:44.425 17:18:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:44.426 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:44.426 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:44.426 altname enp217s0f0np0 00:15:44.426 altname ens818f0np0 00:15:44.426 inet 192.168.100.8/24 scope global mlx_0_0 00:15:44.426 valid_lft forever preferred_lft forever 00:15:44.426 17:18:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:44.426 17:18:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:44.426 17:18:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.426 17:18:40 -- nvmf/common.sh@112 
-- # ip -o -4 addr show mlx_0_1 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.426 17:18:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:44.426 17:18:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:44.426 17:18:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:44.426 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:44.426 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:44.426 altname enp217s0f1np1 00:15:44.426 altname ens818f1np1 00:15:44.426 inet 192.168.100.9/24 scope global mlx_0_1 00:15:44.426 valid_lft forever preferred_lft forever 00:15:44.426 17:18:40 -- nvmf/common.sh@410 -- # return 0 00:15:44.426 17:18:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:44.426 17:18:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:44.426 17:18:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:44.426 17:18:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:44.426 17:18:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:44.426 17:18:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:44.426 17:18:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:44.426 17:18:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:44.426 17:18:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:44.426 17:18:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:44.426 17:18:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.426 17:18:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.426 17:18:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:44.426 17:18:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:44.426 17:18:40 -- nvmf/common.sh@104 -- # continue 2 00:15:44.426 17:18:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:44.426 17:18:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.426 17:18:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:44.426 17:18:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:44.426 17:18:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:44.426 17:18:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:44.426 17:18:40 -- nvmf/common.sh@104 -- # continue 2 00:15:44.426 17:18:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:44.426 17:18:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:44.426 17:18:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.426 17:18:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:44.426 17:18:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:44.426 17:18:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:44.426 17:18:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:44.426 17:18:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:44.426 192.168.100.9' 00:15:44.426 17:18:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:44.426 192.168.100.9' 00:15:44.426 17:18:40 -- nvmf/common.sh@445 -- # head -n 1 00:15:44.426 17:18:40 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:44.426 17:18:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:44.426 192.168.100.9' 00:15:44.426 17:18:40 -- nvmf/common.sh@446 -- # tail -n +2 00:15:44.426 17:18:40 -- nvmf/common.sh@446 -- # head -n 1 00:15:44.426 17:18:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:44.426 17:18:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:44.426 17:18:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:44.426 17:18:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:44.426 17:18:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:44.426 17:18:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:44.426 17:18:40 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:44.426 17:18:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:44.426 17:18:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.426 17:18:40 -- common/autotest_common.sh@10 -- # set +x 00:15:44.426 17:18:40 -- nvmf/common.sh@469 -- # nvmfpid=1304783 00:15:44.426 17:18:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.426 17:18:40 -- nvmf/common.sh@470 -- # waitforlisten 1304783 00:15:44.426 17:18:40 -- common/autotest_common.sh@829 -- # '[' -z 1304783 ']' 00:15:44.426 17:18:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.426 17:18:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.426 17:18:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.426 17:18:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.426 17:18:40 -- common/autotest_common.sh@10 -- # set +x 00:15:44.426 [2024-12-14 17:18:40.875183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:44.426 [2024-12-14 17:18:40.875233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.426 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.426 [2024-12-14 17:18:40.945332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.426 [2024-12-14 17:18:40.982915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.426 [2024-12-14 17:18:40.983025] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.426 [2024-12-14 17:18:40.983039] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.426 [2024-12-14 17:18:40.983048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
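The two target addresses used by the test come straight from the RDMA-capable netdevs discovered above. A minimal standalone sketch of that selection pattern (a hypothetical reproduction, not the literal nvmf/common.sh code; the device names mlx_0_0/mlx_0_1 are simply the two ports found on this node in the trace):

get_ip_address() {   # first IPv4 address of an interface, exactly as traced above
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9
[ -n "$NVMF_FIRST_TARGET_IP" ] || exit 1   # as in the trace, nothing can run without an RDMA IP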
00:15:44.426 [2024-12-14 17:18:40.983093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.426 [2024-12-14 17:18:40.983207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.426 [2024-12-14 17:18:40.983279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.426 [2024-12-14 17:18:40.983280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.362 17:18:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.362 17:18:41 -- common/autotest_common.sh@862 -- # return 0 00:15:45.362 17:18:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:45.362 17:18:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.362 17:18:41 -- common/autotest_common.sh@10 -- # set +x 00:15:45.362 17:18:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.362 17:18:41 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:45.362 17:18:41 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32097 00:15:45.362 [2024-12-14 17:18:41.910444] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:45.362 17:18:41 -- target/invalid.sh@40 -- # out='request: 00:15:45.362 { 00:15:45.362 "nqn": "nqn.2016-06.io.spdk:cnode32097", 00:15:45.362 "tgt_name": "foobar", 00:15:45.362 "method": "nvmf_create_subsystem", 00:15:45.362 "req_id": 1 00:15:45.362 } 00:15:45.362 Got JSON-RPC error response 00:15:45.362 response: 00:15:45.362 { 00:15:45.362 "code": -32603, 00:15:45.362 "message": "Unable to find target foobar" 00:15:45.362 }' 00:15:45.362 17:18:41 -- target/invalid.sh@41 -- # [[ request: 00:15:45.362 { 00:15:45.362 "nqn": "nqn.2016-06.io.spdk:cnode32097", 00:15:45.362 "tgt_name": "foobar", 00:15:45.362 "method": "nvmf_create_subsystem", 00:15:45.362 "req_id": 1 00:15:45.362 } 00:15:45.362 Got JSON-RPC error response 00:15:45.362 response: 00:15:45.362 { 00:15:45.362 "code": -32603, 00:15:45.362 "message": "Unable to find target foobar" 00:15:45.362 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:45.362 17:18:41 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:45.362 17:18:41 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2332 00:15:45.622 [2024-12-14 17:18:42.099135] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2332: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:45.622 17:18:42 -- target/invalid.sh@45 -- # out='request: 00:15:45.622 { 00:15:45.622 "nqn": "nqn.2016-06.io.spdk:cnode2332", 00:15:45.622 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:45.622 "method": "nvmf_create_subsystem", 00:15:45.622 "req_id": 1 00:15:45.622 } 00:15:45.622 Got JSON-RPC error response 00:15:45.622 response: 00:15:45.622 { 00:15:45.622 "code": -32602, 00:15:45.622 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:45.622 }' 00:15:45.622 17:18:42 -- target/invalid.sh@46 -- # [[ request: 00:15:45.622 { 00:15:45.622 "nqn": "nqn.2016-06.io.spdk:cnode2332", 00:15:45.622 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:45.622 "method": "nvmf_create_subsystem", 00:15:45.622 "req_id": 1 00:15:45.622 } 00:15:45.622 Got JSON-RPC error response 00:15:45.622 response: 00:15:45.622 { 00:15:45.622 
"code": -32602, 00:15:45.622 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:45.622 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:45.622 17:18:42 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:45.622 17:18:42 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20451 00:15:45.622 [2024-12-14 17:18:42.287737] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20451: invalid model number 'SPDK_Controller' 00:15:45.881 17:18:42 -- target/invalid.sh@50 -- # out='request: 00:15:45.881 { 00:15:45.881 "nqn": "nqn.2016-06.io.spdk:cnode20451", 00:15:45.881 "model_number": "SPDK_Controller\u001f", 00:15:45.881 "method": "nvmf_create_subsystem", 00:15:45.881 "req_id": 1 00:15:45.881 } 00:15:45.881 Got JSON-RPC error response 00:15:45.881 response: 00:15:45.881 { 00:15:45.881 "code": -32602, 00:15:45.881 "message": "Invalid MN SPDK_Controller\u001f" 00:15:45.881 }' 00:15:45.881 17:18:42 -- target/invalid.sh@51 -- # [[ request: 00:15:45.881 { 00:15:45.881 "nqn": "nqn.2016-06.io.spdk:cnode20451", 00:15:45.881 "model_number": "SPDK_Controller\u001f", 00:15:45.881 "method": "nvmf_create_subsystem", 00:15:45.881 "req_id": 1 00:15:45.881 } 00:15:45.881 Got JSON-RPC error response 00:15:45.881 response: 00:15:45.881 { 00:15:45.881 "code": -32602, 00:15:45.881 "message": "Invalid MN SPDK_Controller\u001f" 00:15:45.881 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:45.881 17:18:42 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:45.881 17:18:42 -- target/invalid.sh@19 -- # local length=21 ll 00:15:45.881 17:18:42 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:45.881 17:18:42 -- target/invalid.sh@21 -- # local chars 00:15:45.881 17:18:42 -- target/invalid.sh@22 -- # local string 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 126 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+='~' 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 71 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=G 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 68 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=D 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 51 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo 
-e '\x33' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=3 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 86 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=V 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 32 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=' ' 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 64 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=@ 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 116 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=t 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 108 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=l 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 35 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+='#' 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 109 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+=m 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # printf %x 33 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:45.881 17:18:42 -- target/invalid.sh@25 -- # string+='!' 
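The string being assembled here is produced one character at a time: gen_random_s walks an array of decimal ASCII codes (32..127), converts each pick to hex with printf %x, decodes it with echo -e, and appends it. A minimal sketch of that loop, under the assumption that the index comes from $RANDOM (the trace only shows which codes were chosen, not how they were picked; RANDOM=0 was set earlier, which makes the sequence reproducible):

chars=($(seq 32 127))                         # printable ASCII plus DEL, as decimal codes
string=
length=21
for (( ll = 0; ll < length; ll++ )); do
    code=${chars[RANDOM % ${#chars[@]}]}          # assumed: random index into the code table
    string+=$(echo -e "\x$(printf %x "$code")")   # decimal -> hex -> literal character
done
# the real helper also tests the first character against '-' ( [[ ... == \- ]] ) before echoing
echo "$string"                                # e.g. '~GD3V @tl#m!8H s2t/3?'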
00:15:45.881 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 56 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=8 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 72 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=H 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 32 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=' ' 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 115 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=s 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 50 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=2 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 116 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=t 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 47 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=/ 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 51 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+=3 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # printf %x 63 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:45.882 17:18:42 -- target/invalid.sh@25 -- # string+='?' 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.882 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.882 17:18:42 -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:15:45.882 17:18:42 -- target/invalid.sh@31 -- # echo '~GD3V @tl#m!8H s2t/3?' 00:15:45.882 17:18:42 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '~GD3V @tl#m!8H s2t/3?' nqn.2016-06.io.spdk:cnode28059 00:15:46.141 [2024-12-14 17:18:42.636954] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28059: invalid serial number '~GD3V @tl#m!8H s2t/3?' 
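As with the earlier foobar-target and SPDKISFASTANDAWESOME\u001f checks, the test now feeds this random string to the RPC and asserts on the JSON-RPC error text. A simplified standalone sketch of that negative-test pattern (the rpc.py path, NQN, and serial number are taken from the trace; capturing stderr with 2>&1 is an assumption):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
out=$($rpc nvmf_create_subsystem -s '~GD3V @tl#m!8H s2t/3?' nqn.2016-06.io.spdk:cnode28059 2>&1 || true)
[[ $out == *"Invalid SN"* ]] || exit 1   # the call must fail, and with exactly this class of error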
00:15:46.141 17:18:42 -- target/invalid.sh@54 -- # out='request: 00:15:46.141 { 00:15:46.141 "nqn": "nqn.2016-06.io.spdk:cnode28059", 00:15:46.141 "serial_number": "~GD3V @tl#m!8H s2t/3?", 00:15:46.141 "method": "nvmf_create_subsystem", 00:15:46.141 "req_id": 1 00:15:46.141 } 00:15:46.141 Got JSON-RPC error response 00:15:46.141 response: 00:15:46.141 { 00:15:46.141 "code": -32602, 00:15:46.141 "message": "Invalid SN ~GD3V @tl#m!8H s2t/3?" 00:15:46.141 }' 00:15:46.141 17:18:42 -- target/invalid.sh@55 -- # [[ request: 00:15:46.141 { 00:15:46.141 "nqn": "nqn.2016-06.io.spdk:cnode28059", 00:15:46.141 "serial_number": "~GD3V @tl#m!8H s2t/3?", 00:15:46.141 "method": "nvmf_create_subsystem", 00:15:46.141 "req_id": 1 00:15:46.141 } 00:15:46.141 Got JSON-RPC error response 00:15:46.141 response: 00:15:46.141 { 00:15:46.141 "code": -32602, 00:15:46.141 "message": "Invalid SN ~GD3V @tl#m!8H s2t/3?" 00:15:46.142 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:46.142 17:18:42 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:46.142 17:18:42 -- target/invalid.sh@19 -- # local length=41 ll 00:15:46.142 17:18:42 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:46.142 17:18:42 -- target/invalid.sh@21 -- # local chars 00:15:46.142 17:18:42 -- target/invalid.sh@22 -- # local string 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 96 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+='`' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 53 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=5 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 73 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=I 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 127 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=$'\177' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 60 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+='<' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 
-- target/invalid.sh@25 -- # printf %x 75 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=K 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 89 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=Y 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 50 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=2 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 35 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+='#' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 101 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=e 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 93 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=']' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 91 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+='[' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 88 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=X 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 120 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=x 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 106 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=j 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 86 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=V 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- 
target/invalid.sh@25 -- # printf %x 53 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=5 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 88 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+=X 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 35 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+='#' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # printf %x 35 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:46.142 17:18:42 -- target/invalid.sh@25 -- # string+='#' 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.142 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 94 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+='^' 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 111 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=o 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 84 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=T 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 67 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=C 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 36 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+='$' 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 107 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=k 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 49 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=1 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- 
target/invalid.sh@25 -- # printf %x 55 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=7 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 52 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=4 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 100 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=d 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 95 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=_ 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # printf %x 113 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:46.401 17:18:42 -- target/invalid.sh@25 -- # string+=q 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.401 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 123 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+='{' 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 75 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+=K 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 113 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+=q 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 101 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+=e 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 72 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+=H 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 91 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+='[' 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- 
target/invalid.sh@25 -- # printf %x 61 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+== 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 60 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+='<' 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # printf %x 76 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:46.402 17:18:42 -- target/invalid.sh@25 -- # string+=L 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:46.402 17:18:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:46.402 17:18:42 -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:15:46.402 17:18:42 -- target/invalid.sh@31 -- # echo '`5I ver2_l ? ver1_l : ver2_l) )) 00:15:48.992 17:18:45 -- scripts/common.sh@364 -- # decimal 1 00:15:48.992 17:18:45 -- scripts/common.sh@352 -- # local d=1 00:15:48.992 17:18:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.992 17:18:45 -- scripts/common.sh@354 -- # echo 1 00:15:48.992 17:18:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:48.992 17:18:45 -- scripts/common.sh@365 -- # decimal 2 00:15:48.992 17:18:45 -- scripts/common.sh@352 -- # local d=2 00:15:48.992 17:18:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.992 17:18:45 -- scripts/common.sh@354 -- # echo 2 00:15:48.992 17:18:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:48.992 17:18:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:48.992 17:18:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:48.992 17:18:45 -- scripts/common.sh@367 -- # return 0 00:15:48.992 17:18:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.992 17:18:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 00:15:48.992 --rc geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 17:18:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 00:15:48.992 --rc geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 17:18:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 00:15:48.992 --rc geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 17:18:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 
00:15:48.992 --rc geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 17:18:45 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.992 17:18:45 -- nvmf/common.sh@7 -- # uname -s 00:15:48.992 17:18:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.992 17:18:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.992 17:18:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.992 17:18:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.992 17:18:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.992 17:18:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.992 17:18:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.992 17:18:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.992 17:18:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.992 17:18:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.992 17:18:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:48.992 17:18:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:48.992 17:18:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.992 17:18:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.992 17:18:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.992 17:18:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:48.992 17:18:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.992 17:18:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.992 17:18:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.992 17:18:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 17:18:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 17:18:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 
17:18:45 -- paths/export.sh@5 -- # export PATH 00:15:48.992 17:18:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 17:18:45 -- nvmf/common.sh@46 -- # : 0 00:15:48.992 17:18:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:48.992 17:18:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:48.992 17:18:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:48.992 17:18:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.993 17:18:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.993 17:18:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:48.993 17:18:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:48.993 17:18:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:48.993 17:18:45 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.993 17:18:45 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:48.993 17:18:45 -- target/abort.sh@14 -- # nvmftestinit 00:15:48.993 17:18:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:48.993 17:18:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.993 17:18:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:48.993 17:18:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:48.993 17:18:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:48.993 17:18:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.993 17:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.993 17:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.993 17:18:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:48.993 17:18:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:48.993 17:18:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:48.993 17:18:45 -- common/autotest_common.sh@10 -- # set +x 00:15:55.562 17:18:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:55.562 17:18:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:55.562 17:18:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:55.562 17:18:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:55.562 17:18:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:55.562 17:18:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:55.562 17:18:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:55.562 17:18:52 -- nvmf/common.sh@294 -- # net_devs=() 00:15:55.562 17:18:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:55.562 17:18:52 -- nvmf/common.sh@295 -- # e810=() 00:15:55.562 17:18:52 -- nvmf/common.sh@295 -- # local -ga e810 00:15:55.562 17:18:52 -- nvmf/common.sh@296 -- # x722=() 00:15:55.562 17:18:52 -- nvmf/common.sh@296 -- # local -ga x722 00:15:55.562 17:18:52 -- nvmf/common.sh@297 -- # mlx=() 00:15:55.562 17:18:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:55.562 17:18:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.562 17:18:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:55.562 17:18:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:55.562 17:18:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:55.562 17:18:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:55.562 17:18:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:55.562 17:18:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:55.562 17:18:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:55.562 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:55.562 17:18:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:55.562 17:18:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:55.562 17:18:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:55.562 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:55.562 17:18:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:55.562 17:18:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:55.562 17:18:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:55.562 17:18:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.562 17:18:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:55.562 17:18:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.562 17:18:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:55.562 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:55.562 17:18:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.562 17:18:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:55.562 17:18:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.562 17:18:52 -- nvmf/common.sh@383 -- 
# (( 1 == 0 )) 00:15:55.562 17:18:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.562 17:18:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:55.562 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:55.562 17:18:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.562 17:18:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:55.562 17:18:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:55.562 17:18:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:55.562 17:18:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:55.562 17:18:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:55.562 17:18:52 -- nvmf/common.sh@57 -- # uname 00:15:55.562 17:18:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:55.562 17:18:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:55.562 17:18:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:55.562 17:18:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:55.562 17:18:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:55.562 17:18:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:55.562 17:18:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:55.562 17:18:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:55.822 17:18:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:55.822 17:18:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:55.822 17:18:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:55.822 17:18:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:55.822 17:18:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:55.822 17:18:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:55.822 17:18:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:55.822 17:18:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:55.822 17:18:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@104 -- # continue 2 00:15:55.822 17:18:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@104 -- # continue 2 00:15:55.822 17:18:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:55.822 17:18:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:55.822 17:18:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:55.822 17:18:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 
00:15:55.822 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:55.822 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:55.822 altname enp217s0f0np0 00:15:55.822 altname ens818f0np0 00:15:55.822 inet 192.168.100.8/24 scope global mlx_0_0 00:15:55.822 valid_lft forever preferred_lft forever 00:15:55.822 17:18:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:55.822 17:18:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:55.822 17:18:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:55.822 17:18:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:55.822 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:55.822 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:55.822 altname enp217s0f1np1 00:15:55.822 altname ens818f1np1 00:15:55.822 inet 192.168.100.9/24 scope global mlx_0_1 00:15:55.822 valid_lft forever preferred_lft forever 00:15:55.822 17:18:52 -- nvmf/common.sh@410 -- # return 0 00:15:55.822 17:18:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:55.822 17:18:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:55.822 17:18:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:55.822 17:18:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:55.822 17:18:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:55.822 17:18:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:55.822 17:18:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:55.822 17:18:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:55.822 17:18:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:55.822 17:18:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@104 -- # continue 2 00:15:55.822 17:18:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.822 17:18:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:55.822 17:18:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@104 -- # continue 2 00:15:55.822 17:18:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:55.822 17:18:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:55.822 17:18:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:55.822 17:18:52 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:55.822 17:18:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:55.822 17:18:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:55.822 192.168.100.9' 00:15:55.822 17:18:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:55.822 192.168.100.9' 00:15:55.822 17:18:52 -- nvmf/common.sh@445 -- # head -n 1 00:15:55.822 17:18:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:55.822 17:18:52 -- nvmf/common.sh@446 -- # head -n 1 00:15:55.822 17:18:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:55.822 192.168.100.9' 00:15:55.822 17:18:52 -- nvmf/common.sh@446 -- # tail -n +2 00:15:55.822 17:18:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:55.822 17:18:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:55.822 17:18:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:55.822 17:18:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:55.822 17:18:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:55.822 17:18:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:55.822 17:18:52 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:55.822 17:18:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:55.822 17:18:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.822 17:18:52 -- common/autotest_common.sh@10 -- # set +x 00:15:55.822 17:18:52 -- nvmf/common.sh@469 -- # nvmfpid=1309191 00:15:55.823 17:18:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:55.823 17:18:52 -- nvmf/common.sh@470 -- # waitforlisten 1309191 00:15:55.823 17:18:52 -- common/autotest_common.sh@829 -- # '[' -z 1309191 ']' 00:15:55.823 17:18:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.823 17:18:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.823 17:18:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.823 17:18:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.823 17:18:52 -- common/autotest_common.sh@10 -- # set +x 00:15:55.823 [2024-12-14 17:18:52.491335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:55.823 [2024-12-14 17:18:52.491386] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.082 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.082 [2024-12-14 17:18:52.562752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:56.082 [2024-12-14 17:18:52.598618] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.082 [2024-12-14 17:18:52.598737] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.082 [2024-12-14 17:18:52.598747] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
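The pipeline traced just above is how nvmf/common.sh discovers the target addresses: for each RDMA netdev it runs 'ip -o -4 addr show', keeps the fourth field, strips the prefix length, collects the results into RDMA_IP_LIST, and peels off the first and second entries with head/tail. A condensed, stand-alone sketch of that get_ip_address logic (the RDMA_IP_LIST head/tail handling is folded into direct calls here, purely for illustration):

get_ip_address() {
    # First IPv4 address on the given interface, prefix length removed
    # (same 'ip -o -4 ... | awk | cut' pipeline seen in the trace above).
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run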
00:15:56.082 [2024-12-14 17:18:52.598756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.082 [2024-12-14 17:18:52.598860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.082 [2024-12-14 17:18:52.598942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.082 [2024-12-14 17:18:52.598943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.650 17:18:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.650 17:18:53 -- common/autotest_common.sh@862 -- # return 0 00:15:56.650 17:18:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:56.650 17:18:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.650 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 17:18:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.909 17:18:53 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:56.909 17:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.909 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 [2024-12-14 17:18:53.378753] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1836920/0x183add0) succeed. 00:15:56.909 [2024-12-14 17:18:53.387838] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1837e20/0x187c470) succeed. 00:15:56.909 17:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.909 17:18:53 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:56.909 17:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.909 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 Malloc0 00:15:56.909 17:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.909 17:18:53 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:56.909 17:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.909 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 Delay0 00:15:56.909 17:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.909 17:18:53 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:56.909 17:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.909 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 17:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.909 17:18:53 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:56.909 17:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.909 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 17:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.909 17:18:53 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:56.909 17:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.909 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 [2024-12-14 17:18:53.548857] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:56.909 17:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.909 17:18:53 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 
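Stripped of the rpc_cmd/xtrace wrapping, the target-side setup that abort.sh has just completed boils down to the RPC sequence below (same commands and arguments as in the trace; 'rpc.py' stands for the full spdk/scripts/rpc.py path and the comments are editorial):

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB malloc bdev, 4096-byte blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                   # ~1 s of injected latency so aborts catch I/O still in flight
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The abort example is then pointed at that listener (build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -q 128); in the run below it submits 51850 aborts against the queued reads, 51789 of which succeed.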
00:15:56.909 17:18:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.909 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 17:18:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.909 17:18:53 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:57.168 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.168 [2024-12-14 17:18:53.641600] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:59.078 Initializing NVMe Controllers 00:15:59.078 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:59.078 controller IO queue size 128 less than required 00:15:59.078 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:59.078 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:59.078 Initialization complete. Launching workers. 00:15:59.078 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51789 00:15:59.078 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51850, failed to submit 62 00:15:59.078 success 51789, unsuccess 61, failed 0 00:15:59.078 17:18:55 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:59.078 17:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.078 17:18:55 -- common/autotest_common.sh@10 -- # set +x 00:15:59.078 17:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.078 17:18:55 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:59.078 17:18:55 -- target/abort.sh@38 -- # nvmftestfini 00:15:59.078 17:18:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:59.078 17:18:55 -- nvmf/common.sh@116 -- # sync 00:15:59.337 17:18:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:59.337 17:18:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:59.337 17:18:55 -- nvmf/common.sh@119 -- # set +e 00:15:59.337 17:18:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:59.337 17:18:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:59.337 rmmod nvme_rdma 00:15:59.337 rmmod nvme_fabrics 00:15:59.337 17:18:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:59.337 17:18:55 -- nvmf/common.sh@123 -- # set -e 00:15:59.337 17:18:55 -- nvmf/common.sh@124 -- # return 0 00:15:59.337 17:18:55 -- nvmf/common.sh@477 -- # '[' -n 1309191 ']' 00:15:59.337 17:18:55 -- nvmf/common.sh@478 -- # killprocess 1309191 00:15:59.337 17:18:55 -- common/autotest_common.sh@936 -- # '[' -z 1309191 ']' 00:15:59.337 17:18:55 -- common/autotest_common.sh@940 -- # kill -0 1309191 00:15:59.337 17:18:55 -- common/autotest_common.sh@941 -- # uname 00:15:59.337 17:18:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.337 17:18:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1309191 00:15:59.337 17:18:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:59.337 17:18:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:59.337 17:18:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1309191' 00:15:59.337 killing process with pid 1309191 00:15:59.337 17:18:55 -- common/autotest_common.sh@955 -- # kill 1309191 00:15:59.337 17:18:55 -- 
common/autotest_common.sh@960 -- # wait 1309191 00:15:59.596 17:18:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:59.596 17:18:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:59.596 00:15:59.596 real 0m10.757s 00:15:59.596 user 0m14.608s 00:15:59.596 sys 0m5.791s 00:15:59.596 17:18:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:59.596 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:15:59.596 ************************************ 00:15:59.596 END TEST nvmf_abort 00:15:59.596 ************************************ 00:15:59.596 17:18:56 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:59.596 17:18:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:59.596 17:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.596 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:15:59.596 ************************************ 00:15:59.596 START TEST nvmf_ns_hotplug_stress 00:15:59.596 ************************************ 00:15:59.596 17:18:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:59.596 * Looking for test storage... 00:15:59.596 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:59.596 17:18:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:59.596 17:18:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:59.596 17:18:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:59.856 17:18:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:59.856 17:18:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:59.856 17:18:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:59.856 17:18:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:59.856 17:18:56 -- scripts/common.sh@335 -- # IFS=.-: 00:15:59.856 17:18:56 -- scripts/common.sh@335 -- # read -ra ver1 00:15:59.856 17:18:56 -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.856 17:18:56 -- scripts/common.sh@336 -- # read -ra ver2 00:15:59.856 17:18:56 -- scripts/common.sh@337 -- # local 'op=<' 00:15:59.856 17:18:56 -- scripts/common.sh@339 -- # ver1_l=2 00:15:59.856 17:18:56 -- scripts/common.sh@340 -- # ver2_l=1 00:15:59.856 17:18:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:59.856 17:18:56 -- scripts/common.sh@343 -- # case "$op" in 00:15:59.856 17:18:56 -- scripts/common.sh@344 -- # : 1 00:15:59.856 17:18:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:59.856 17:18:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.856 17:18:56 -- scripts/common.sh@364 -- # decimal 1 00:15:59.856 17:18:56 -- scripts/common.sh@352 -- # local d=1 00:15:59.856 17:18:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.856 17:18:56 -- scripts/common.sh@354 -- # echo 1 00:15:59.856 17:18:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:59.856 17:18:56 -- scripts/common.sh@365 -- # decimal 2 00:15:59.856 17:18:56 -- scripts/common.sh@352 -- # local d=2 00:15:59.856 17:18:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.856 17:18:56 -- scripts/common.sh@354 -- # echo 2 00:15:59.856 17:18:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:59.856 17:18:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:59.856 17:18:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:59.856 17:18:56 -- scripts/common.sh@367 -- # return 0 00:15:59.856 17:18:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.856 17:18:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.856 --rc genhtml_branch_coverage=1 00:15:59.856 --rc genhtml_function_coverage=1 00:15:59.856 --rc genhtml_legend=1 00:15:59.856 --rc geninfo_all_blocks=1 00:15:59.856 --rc geninfo_unexecuted_blocks=1 00:15:59.856 00:15:59.856 ' 00:15:59.856 17:18:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.856 --rc genhtml_branch_coverage=1 00:15:59.856 --rc genhtml_function_coverage=1 00:15:59.856 --rc genhtml_legend=1 00:15:59.856 --rc geninfo_all_blocks=1 00:15:59.856 --rc geninfo_unexecuted_blocks=1 00:15:59.856 00:15:59.856 ' 00:15:59.856 17:18:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.856 --rc genhtml_branch_coverage=1 00:15:59.856 --rc genhtml_function_coverage=1 00:15:59.856 --rc genhtml_legend=1 00:15:59.856 --rc geninfo_all_blocks=1 00:15:59.856 --rc geninfo_unexecuted_blocks=1 00:15:59.856 00:15:59.856 ' 00:15:59.856 17:18:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.856 --rc genhtml_branch_coverage=1 00:15:59.856 --rc genhtml_function_coverage=1 00:15:59.856 --rc genhtml_legend=1 00:15:59.856 --rc geninfo_all_blocks=1 00:15:59.856 --rc geninfo_unexecuted_blocks=1 00:15:59.856 00:15:59.856 ' 00:15:59.856 17:18:56 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.856 17:18:56 -- nvmf/common.sh@7 -- # uname -s 00:15:59.856 17:18:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.856 17:18:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.856 17:18:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.856 17:18:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.856 17:18:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.856 17:18:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.856 17:18:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.856 17:18:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.856 17:18:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.856 17:18:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.856 17:18:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:59.856 17:18:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:59.856 17:18:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.856 17:18:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.856 17:18:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.856 17:18:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:59.856 17:18:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.856 17:18:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.856 17:18:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.856 17:18:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.857 17:18:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.857 17:18:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.857 17:18:56 -- paths/export.sh@5 -- # export PATH 00:15:59.857 17:18:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.857 17:18:56 -- nvmf/common.sh@46 -- # : 0 00:15:59.857 17:18:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:59.857 17:18:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:59.857 17:18:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:59.857 17:18:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.857 17:18:56 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.857 17:18:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:59.857 17:18:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:59.857 17:18:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:59.857 17:18:56 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:59.857 17:18:56 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:59.857 17:18:56 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:59.857 17:18:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.857 17:18:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:59.857 17:18:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:59.857 17:18:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:59.857 17:18:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.857 17:18:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.857 17:18:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.857 17:18:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:59.857 17:18:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:59.857 17:18:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:59.857 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:16:06.425 17:19:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.425 17:19:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:06.425 17:19:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:06.425 17:19:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:06.425 17:19:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:06.425 17:19:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:06.425 17:19:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:06.425 17:19:02 -- nvmf/common.sh@294 -- # net_devs=() 00:16:06.425 17:19:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:06.425 17:19:02 -- nvmf/common.sh@295 -- # e810=() 00:16:06.425 17:19:02 -- nvmf/common.sh@295 -- # local -ga e810 00:16:06.425 17:19:02 -- nvmf/common.sh@296 -- # x722=() 00:16:06.425 17:19:02 -- nvmf/common.sh@296 -- # local -ga x722 00:16:06.425 17:19:02 -- nvmf/common.sh@297 -- # mlx=() 00:16:06.425 17:19:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:06.425 17:19:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.425 17:19:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:06.425 17:19:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:06.425 
17:19:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:06.425 17:19:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:06.425 17:19:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:06.425 17:19:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:06.425 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:06.425 17:19:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:06.425 17:19:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:06.425 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:06.425 17:19:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:06.425 17:19:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:06.425 17:19:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.425 17:19:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.425 17:19:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.425 17:19:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:06.425 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:06.425 17:19:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.425 17:19:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.425 17:19:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.425 17:19:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.425 17:19:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:06.425 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:06.425 17:19:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.425 17:19:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:06.425 17:19:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:06.425 17:19:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:06.425 17:19:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:06.425 17:19:02 -- nvmf/common.sh@57 -- # uname 00:16:06.425 17:19:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:06.425 17:19:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:06.425 
17:19:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:06.425 17:19:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:06.425 17:19:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:06.425 17:19:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:06.425 17:19:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:06.425 17:19:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:06.425 17:19:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:06.425 17:19:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:06.425 17:19:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:06.425 17:19:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:06.425 17:19:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:06.425 17:19:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:06.425 17:19:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:06.425 17:19:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:06.425 17:19:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:06.425 17:19:02 -- nvmf/common.sh@104 -- # continue 2 00:16:06.425 17:19:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:06.425 17:19:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:06.425 17:19:02 -- nvmf/common.sh@104 -- # continue 2 00:16:06.425 17:19:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:06.425 17:19:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:06.425 17:19:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:06.425 17:19:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:06.425 17:19:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:06.425 17:19:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:06.425 17:19:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:06.425 17:19:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:06.425 17:19:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:06.425 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:06.425 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:06.425 altname enp217s0f0np0 00:16:06.425 altname ens818f0np0 00:16:06.425 inet 192.168.100.8/24 scope global mlx_0_0 00:16:06.425 valid_lft forever preferred_lft forever 00:16:06.425 17:19:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:06.425 17:19:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:06.425 17:19:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:06.425 17:19:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:06.425 17:19:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:06.425 17:19:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:06.425 17:19:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:06.425 17:19:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:06.425 17:19:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:06.425 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:16:06.425 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:06.425 altname enp217s0f1np1 00:16:06.425 altname ens818f1np1 00:16:06.425 inet 192.168.100.9/24 scope global mlx_0_1 00:16:06.425 valid_lft forever preferred_lft forever 00:16:06.425 17:19:03 -- nvmf/common.sh@410 -- # return 0 00:16:06.425 17:19:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:06.425 17:19:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:06.425 17:19:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:06.425 17:19:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:06.425 17:19:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:06.425 17:19:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:06.425 17:19:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:06.425 17:19:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:06.426 17:19:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:06.426 17:19:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:06.426 17:19:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:06.426 17:19:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:06.426 17:19:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:06.426 17:19:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:06.426 17:19:03 -- nvmf/common.sh@104 -- # continue 2 00:16:06.426 17:19:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:06.426 17:19:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:06.426 17:19:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:06.426 17:19:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:06.426 17:19:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:06.426 17:19:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:06.426 17:19:03 -- nvmf/common.sh@104 -- # continue 2 00:16:06.426 17:19:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:06.426 17:19:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:06.426 17:19:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:06.426 17:19:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:06.426 17:19:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:06.426 17:19:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:06.426 17:19:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:06.426 17:19:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:06.426 17:19:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:06.426 17:19:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:06.426 17:19:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:06.426 17:19:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:06.426 17:19:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:06.426 192.168.100.9' 00:16:06.426 17:19:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:06.426 192.168.100.9' 00:16:06.426 17:19:03 -- nvmf/common.sh@445 -- # head -n 1 00:16:06.426 17:19:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:06.426 17:19:03 -- nvmf/common.sh@446 -- # head -n 1 00:16:06.426 17:19:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:06.426 192.168.100.9' 00:16:06.426 17:19:03 -- nvmf/common.sh@446 -- # tail -n +2 00:16:06.426 17:19:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:06.426 17:19:03 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:16:06.426 17:19:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:06.426 17:19:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:06.426 17:19:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:06.426 17:19:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:06.685 17:19:03 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:06.685 17:19:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:06.685 17:19:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.685 17:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:06.685 17:19:03 -- nvmf/common.sh@469 -- # nvmfpid=1313065 00:16:06.685 17:19:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:06.685 17:19:03 -- nvmf/common.sh@470 -- # waitforlisten 1313065 00:16:06.685 17:19:03 -- common/autotest_common.sh@829 -- # '[' -z 1313065 ']' 00:16:06.685 17:19:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.685 17:19:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.685 17:19:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.685 17:19:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.685 17:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:06.685 [2024-12-14 17:19:03.171386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:06.685 [2024-12-14 17:19:03.171433] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.685 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.685 [2024-12-14 17:19:03.246100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:06.685 [2024-12-14 17:19:03.282849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:06.685 [2024-12-14 17:19:03.282962] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.685 [2024-12-14 17:19:03.282972] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.685 [2024-12-14 17:19:03.282981] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
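Note on the core mask used for both nvmf_tgt instances: -m 0xE is binary 1110, so the app skips core 0 and places its reactors on cores 1-3, which is what the three 'Reactor started on core ...' notices that follow report (and why spdk_app_start sees 'Total cores available: 3'). A purely illustrative one-liner to decode such a mask:

mask=0xE; for bit in $(seq 0 31); do (( (mask >> bit) & 1 )) && echo "reactor on core $bit"; done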
00:16:06.685 [2024-12-14 17:19:03.283089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.685 [2024-12-14 17:19:03.283109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.685 [2024-12-14 17:19:03.283116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.622 17:19:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.622 17:19:03 -- common/autotest_common.sh@862 -- # return 0 00:16:07.622 17:19:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:07.622 17:19:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:07.622 17:19:03 -- common/autotest_common.sh@10 -- # set +x 00:16:07.622 17:19:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.622 17:19:04 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:07.622 17:19:04 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:07.622 [2024-12-14 17:19:04.218721] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc52900/0xc56db0) succeed. 00:16:07.622 [2024-12-14 17:19:04.227799] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc53e00/0xc98450) succeed. 00:16:07.880 17:19:04 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:07.880 17:19:04 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:08.139 [2024-12-14 17:19:04.703722] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:08.139 17:19:04 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:08.397 17:19:04 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:08.656 Malloc0 00:16:08.656 17:19:05 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:08.656 Delay0 00:16:08.656 17:19:05 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:08.915 17:19:05 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:09.172 NULL1 00:16:09.172 17:19:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:09.172 17:19:05 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:09.172 17:19:05 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1313521 00:16:09.431 17:19:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:09.431 17:19:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.431 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.367 Read completed with error (sct=0, sc=11) 00:16:10.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.367 17:19:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:10.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.626 17:19:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:10.626 17:19:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:10.885 true 00:16:10.885 17:19:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:10.885 17:19:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 17:19:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:11.822 17:19:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:11.822 17:19:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:12.081 true 00:16:12.081 17:19:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:12.081 17:19:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.018 17:19:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
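From this point the trace is one long repetition of the hotplug cycle: while the spdk_nvme_perf workload started above (PERF_PID=1313521, 30 s of 512-byte random reads at queue depth 128) is still alive, the script detaches namespace 1 from cnode1, re-attaches Delay0, bumps null_size and resizes NULL1. The 'Read completed with error (sct=0, sc=11)' lines are the I/O failures perf is expected to absorb while the namespace is momentarily gone. A condensed sketch of the loop (the rpc_cmd plumbing and the per-iteration 'true' result checks are omitted):

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                          # keep cycling while perf is still running
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # hot-remove namespace 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # hot-add it back
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                     # grow NULL1 each pass: 1001, 1002, ...
done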
00:16:13.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.018 17:19:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:13.018 17:19:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:13.277 true 00:16:13.277 17:19:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:13.277 17:19:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.213 17:19:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:14.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.213 17:19:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:14.213 17:19:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:14.472 true 00:16:14.472 17:19:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:14.472 17:19:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 17:19:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:15.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.379 17:19:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:15.379 17:19:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:15.637 true 00:16:15.637 17:19:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:15.637 17:19:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.574 17:19:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:16.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.574 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.574 17:19:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:16.574 17:19:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:16.833 true 00:16:16.833 17:19:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:16.833 17:19:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 17:19:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.769 17:19:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:16:17.769 17:19:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:18.028 true 00:16:18.028 17:19:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:18.028 17:19:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.968 17:19:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.968 17:19:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:18.968 17:19:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:19.227 true 00:16:19.227 17:19:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:19.227 17:19:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 17:19:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.164 17:19:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:20.164 17:19:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:20.423 true 00:16:20.423 17:19:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:20.423 17:19:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.360 17:19:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:21.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:21.360 17:19:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:21.360 17:19:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:21.619 true 00:16:21.619 17:19:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:21.619 17:19:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.556 17:19:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.556 17:19:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:22.556 17:19:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:22.815 true 00:16:22.815 17:19:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:22.815 17:19:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.753 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:16:23.753 17:19:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.012 17:19:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:24.012 17:19:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:24.012 true 00:16:24.012 17:19:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:24.012 17:19:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.948 17:19:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.207 17:19:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:25.207 17:19:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:25.207 true 00:16:25.207 17:19:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:25.207 17:19:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.147 17:19:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.406 17:19:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:26.406 17:19:22 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:26.406 true 00:16:26.406 17:19:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:26.406 17:19:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.343 17:19:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:27.602 17:19:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:27.602 17:19:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:27.602 true 00:16:27.602 17:19:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:27.602 17:19:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.544 17:19:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.804 17:19:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:28.804 17:19:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:28.804 true 00:16:28.804 17:19:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:28.804 17:19:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.741 17:19:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.741 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:16:29.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.000 17:19:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:30.000 17:19:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:30.000 true 00:16:30.000 17:19:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:30.000 17:19:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.937 17:19:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.197 17:19:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:31.197 17:19:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:31.197 true 00:16:31.197 17:19:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:31.197 17:19:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.135 17:19:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.394 17:19:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:32.394 17:19:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:32.394 true 00:16:32.394 17:19:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:32.394 17:19:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.330 17:19:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
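
The trace above is the first phase of ns_hotplug_stress.sh: while the I/O generator (PID 1313521) keeps issuing reads, script lines 44-50 repeatedly hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attach the Delay0 bdev, and grow the NULL1 null bdev by one block per pass. The suppressed "Read completed with error (sct=0, sc=11)" messages are the reads that land while the namespace is detached (sc=11 here is most likely the NVMe generic status "Invalid Namespace or Format"). A minimal bash sketch of that loop, reconstructed from the traced script lines rather than quoted from the script; the starting null_size value and the perf_pid variable name are assumptions:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=1313521   # PID of the I/O generator launched earlier in the test
    null_size=1000     # assumed starting value; the trace above begins around 1003

    while kill -0 "$perf_pid"; do
        # Hot-remove namespace 1, then immediately re-attach the Delay0 bdev;
        # reads that arrive in the gap complete with an error on the initiator.
        $rpc_py nvmf_subsystem_remove_ns "$nqn" 1
        $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0
        # Grow the null bdev by one block per pass to exercise namespace resize.
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done

Once the generator exits, the kill -0 probe fails and the loop ends, which is what the later "line 44: kill: (1313521) - No such process" message in this log reflects.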
00:16:33.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.588 17:19:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:33.588 17:19:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:33.588 true 00:16:33.588 17:19:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:33.588 17:19:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.523 17:19:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.781 17:19:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:34.781 17:19:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:34.781 true 00:16:34.781 17:19:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:34.781 17:19:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.715 17:19:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.974 17:19:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:35.974 17:19:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:35.974 true 00:16:35.974 17:19:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:35.974 17:19:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.233 17:19:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.491 17:19:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:36.491 17:19:33 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:36.749 true 00:16:36.749 17:19:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:36.749 17:19:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.685 17:19:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.943 17:19:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:37.943 17:19:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:38.201 true 00:16:38.201 17:19:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:38.201 17:19:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.135 17:19:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.135 17:19:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:39.135 17:19:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:39.394 true 00:16:39.394 17:19:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:39.394 17:19:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.329 17:19:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:40.329 17:19:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:40.329 17:19:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:40.587 true 00:16:40.587 17:19:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:40.587 17:19:37 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.587 17:19:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:40.845 17:19:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:40.845 17:19:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:41.104 true 00:16:41.104 17:19:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:41.104 17:19:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.363 17:19:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.363 17:19:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:41.363 17:19:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:41.621 true 00:16:41.621 17:19:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:41.621 17:19:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.879 17:19:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.138 17:19:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:42.138 17:19:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:42.138 true 00:16:42.138 Initializing NVMe Controllers 00:16:42.138 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.138 Controller IO queue size 128, less than required. 00:16:42.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.138 Controller IO queue size 128, less than required. 00:16:42.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.138 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.138 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:42.138 Initialization complete. Launching workers. 
00:16:42.138 ======================================================== 00:16:42.138 Latency(us) 00:16:42.138 Device Information : IOPS MiB/s Average min max 00:16:42.138 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5782.93 2.82 19745.48 906.76 1132276.40 00:16:42.138 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36220.93 17.69 3533.76 1978.86 282290.60 00:16:42.138 ======================================================== 00:16:42.138 Total : 42003.87 20.51 5765.73 906.76 1132276.40 00:16:42.138 00:16:42.138 17:19:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1313521 00:16:42.138 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1313521) - No such process 00:16:42.138 17:19:38 -- target/ns_hotplug_stress.sh@53 -- # wait 1313521 00:16:42.138 17:19:38 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.397 17:19:38 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:42.655 null0 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:42.655 17:19:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:42.914 null1 00:16:42.914 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:42.914 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:42.914 17:19:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:43.171 null2 00:16:43.171 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:43.171 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:43.171 17:19:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:43.171 null3 00:16:43.171 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:43.172 17:19:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:43.172 17:19:39 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:43.430 null4 00:16:43.430 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:43.430 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:43.430 17:19:40 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:43.689 null5 00:16:43.689 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:43.689 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:43.689 17:19:40 -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:43.948 null6 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:43.948 null7 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
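
For reference, the Total row of the latency summary printed above (after the perf process exited) is the IOPS-weighted combination of the two namespaces rather than a simple mean of the two rows; a quick awk check with the reported numbers reproduces it:

    # Consistency check; the input numbers are copied from the summary above.
    awk 'BEGIN {
        iops1 = 5782.93;  avg1 = 19745.48;   # NSID 1: IOPS, average latency (us)
        iops2 = 36220.93; avg2 = 3533.76;    # NSID 2: IOPS, average latency (us)
        total_iops = iops1 + iops2                            # ~42003.87
        total_avg  = (iops1 * avg1 + iops2 * avg2) / total_iops   # ~5765.7 us
        printf "total IOPS %.2f, weighted avg %.2f us\n", total_iops, total_avg
    }'
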
00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
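
At this point the test moves to its multi-threaded phase: script lines 58-66 create eight null bdevs (null0 through null7, 100 MiB each with a 4096-byte block size) and launch eight background add_remove workers, each attaching and detaching its own namespace ID ten times (script lines 14-18), before waiting on all eight worker PIDs. A bash sketch reconstructed from those traced lines, not the verbatim script; the exact local declarations inside add_remove are assumptions:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        # Attach and detach the same namespace ID ten times in a row.
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    nthreads=8
    pids=()

    # One null bdev per worker: null0 .. null7, 100 MiB, 4096-byte blocks.
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done

    # Launch the workers in parallel; worker i churns namespace ID i+1 on null$i.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done

    wait "${pids[@]}"

The interleaved "(( ++i ))" counters and rpc.py add/remove calls that follow are simply the eight background workers' xtrace output racing each other on the console.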
00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@66 -- # wait 1319558 1319559 1319562 1319564 1319567 1319569 1319571 1319572 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:43.948 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:44.207 17:19:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.467 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.727 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.986 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:44.987 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:44.987 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:44.987 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:44.987 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:44.987 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:44.987 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.246 17:19:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:45.511 17:19:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.511 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:45.859 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:46.145 17:19:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.404 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.404 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.404 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:46.404 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.404 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.405 17:19:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.664 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.923 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.182 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.182 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.182 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:47.182 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.182 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.182 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.183 17:19:43 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.442 17:19:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.442 17:19:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:47.701 17:19:44 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:47.960 17:19:44 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:47.960 17:19:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:47.960 17:19:44 -- nvmf/common.sh@116 -- # sync 00:16:47.960 17:19:44 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:47.960 17:19:44 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:47.960 17:19:44 -- nvmf/common.sh@119 -- # set +e 00:16:47.960 17:19:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:47.960 17:19:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:47.960 rmmod nvme_rdma 00:16:47.960 rmmod nvme_fabrics 00:16:47.960 17:19:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:47.960 17:19:44 -- nvmf/common.sh@123 -- # set -e 00:16:47.960 17:19:44 -- nvmf/common.sh@124 -- # return 0 00:16:47.960 17:19:44 -- nvmf/common.sh@477 -- # '[' -n 1313065 ']' 00:16:47.960 17:19:44 -- nvmf/common.sh@478 -- # killprocess 1313065 00:16:47.960 17:19:44 -- common/autotest_common.sh@936 -- # '[' -z 1313065 ']' 00:16:47.960 17:19:44 -- common/autotest_common.sh@940 -- # kill -0 1313065 00:16:47.960 17:19:44 -- common/autotest_common.sh@941 -- # uname 00:16:47.960 17:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.960 17:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1313065 00:16:47.960 17:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:47.960 17:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:47.960 17:19:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1313065' 00:16:47.960 killing process with pid 1313065 00:16:47.960 17:19:44 -- common/autotest_common.sh@955 -- # kill 1313065 00:16:47.960 17:19:44 -- common/autotest_common.sh@960 -- # wait 1313065 00:16:48.220 17:19:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:48.220 17:19:44 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:48.220 00:16:48.220 real 0m48.694s 00:16:48.220 user 3m19.597s 00:16:48.220 sys 0m13.717s 00:16:48.220 17:19:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:48.220 17:19:44 -- common/autotest_common.sh@10 -- # set +x 00:16:48.220 ************************************ 00:16:48.220 END TEST nvmf_ns_hotplug_stress 00:16:48.220 ************************************ 00:16:48.479 17:19:44 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:48.480 17:19:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
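The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls above are driven by the loop at lines 16-18 of ns_hotplug_stress.sh. A minimal sketch of that pattern follows; the rpc.py path, subsystem NQN, and namespace-to-null-bdev pairing are taken from the trace, while the loop shape and the use of background jobs are assumptions inferred from the (( ++i )) / (( i < 10 )) conditions and the out-of-order completion timestamps.

#!/usr/bin/env bash
# Sketch only: reconstructed from the xtrace above, not the verbatim SPDK script.
# Assumes the target already exposes nqn.2016-06.io.spdk:cnode1 and that null
# bdevs null0..null7 were created earlier in the test (bdev_null_create).
set -euo pipefail

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; i++ )); do
    # Attach eight namespaces (nsid 1..8 backed by null0..null7) concurrently.
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" &
    done
    wait
    # Detach them again; concurrent removal is what shuffles the order in the log.
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done

The background jobs are what produce the shuffled ordering in the trace: the RPC server handles one request at a time, but the order in which the eight clients reach the socket is nondeterministic.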
00:16:48.480 17:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.480 17:19:44 -- common/autotest_common.sh@10 -- # set +x 00:16:48.480 ************************************ 00:16:48.480 START TEST nvmf_connect_stress 00:16:48.480 ************************************ 00:16:48.480 17:19:44 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:48.480 * Looking for test storage... 00:16:48.480 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:48.480 17:19:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:48.480 17:19:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:48.480 17:19:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:48.480 17:19:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:48.480 17:19:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:48.480 17:19:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:48.480 17:19:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:48.480 17:19:45 -- scripts/common.sh@335 -- # IFS=.-: 00:16:48.480 17:19:45 -- scripts/common.sh@335 -- # read -ra ver1 00:16:48.480 17:19:45 -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.480 17:19:45 -- scripts/common.sh@336 -- # read -ra ver2 00:16:48.480 17:19:45 -- scripts/common.sh@337 -- # local 'op=<' 00:16:48.480 17:19:45 -- scripts/common.sh@339 -- # ver1_l=2 00:16:48.480 17:19:45 -- scripts/common.sh@340 -- # ver2_l=1 00:16:48.480 17:19:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:48.480 17:19:45 -- scripts/common.sh@343 -- # case "$op" in 00:16:48.480 17:19:45 -- scripts/common.sh@344 -- # : 1 00:16:48.480 17:19:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:48.480 17:19:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.480 17:19:45 -- scripts/common.sh@364 -- # decimal 1 00:16:48.480 17:19:45 -- scripts/common.sh@352 -- # local d=1 00:16:48.480 17:19:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.480 17:19:45 -- scripts/common.sh@354 -- # echo 1 00:16:48.480 17:19:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:48.480 17:19:45 -- scripts/common.sh@365 -- # decimal 2 00:16:48.480 17:19:45 -- scripts/common.sh@352 -- # local d=2 00:16:48.480 17:19:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.480 17:19:45 -- scripts/common.sh@354 -- # echo 2 00:16:48.480 17:19:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:48.480 17:19:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:48.480 17:19:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:48.480 17:19:45 -- scripts/common.sh@367 -- # return 0 00:16:48.480 17:19:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.480 17:19:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:48.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.480 --rc genhtml_branch_coverage=1 00:16:48.480 --rc genhtml_function_coverage=1 00:16:48.480 --rc genhtml_legend=1 00:16:48.480 --rc geninfo_all_blocks=1 00:16:48.480 --rc geninfo_unexecuted_blocks=1 00:16:48.480 00:16:48.480 ' 00:16:48.480 17:19:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:48.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.480 --rc genhtml_branch_coverage=1 00:16:48.480 --rc genhtml_function_coverage=1 00:16:48.480 --rc genhtml_legend=1 00:16:48.480 --rc geninfo_all_blocks=1 00:16:48.480 --rc geninfo_unexecuted_blocks=1 00:16:48.480 00:16:48.480 ' 00:16:48.480 17:19:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:48.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.480 --rc genhtml_branch_coverage=1 00:16:48.480 --rc genhtml_function_coverage=1 00:16:48.480 --rc genhtml_legend=1 00:16:48.480 --rc geninfo_all_blocks=1 00:16:48.480 --rc geninfo_unexecuted_blocks=1 00:16:48.480 00:16:48.480 ' 00:16:48.480 17:19:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:48.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.480 --rc genhtml_branch_coverage=1 00:16:48.480 --rc genhtml_function_coverage=1 00:16:48.480 --rc genhtml_legend=1 00:16:48.480 --rc geninfo_all_blocks=1 00:16:48.480 --rc geninfo_unexecuted_blocks=1 00:16:48.480 00:16:48.480 ' 00:16:48.480 17:19:45 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.480 17:19:45 -- nvmf/common.sh@7 -- # uname -s 00:16:48.480 17:19:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.480 17:19:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.480 17:19:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.480 17:19:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.480 17:19:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.480 17:19:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.480 17:19:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.480 17:19:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.480 17:19:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.480 17:19:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.480 17:19:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:48.480 17:19:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:48.480 17:19:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.480 17:19:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.480 17:19:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.480 17:19:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:48.480 17:19:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.480 17:19:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.480 17:19:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.480 17:19:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.480 17:19:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.480 17:19:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.480 17:19:45 -- paths/export.sh@5 -- # export PATH 00:16:48.480 17:19:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.480 17:19:45 -- nvmf/common.sh@46 -- # : 0 00:16:48.480 17:19:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:48.480 17:19:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:48.480 17:19:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:48.480 17:19:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.480 17:19:45 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.480 17:19:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:48.480 17:19:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:48.480 17:19:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:48.480 17:19:45 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:48.480 17:19:45 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:48.480 17:19:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.480 17:19:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:48.480 17:19:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:48.480 17:19:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:48.480 17:19:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.480 17:19:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.480 17:19:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.480 17:19:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:48.480 17:19:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:48.480 17:19:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:48.480 17:19:45 -- common/autotest_common.sh@10 -- # set +x 00:16:56.605 17:19:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:56.605 17:19:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:56.605 17:19:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:56.605 17:19:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:56.605 17:19:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:56.605 17:19:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:56.605 17:19:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:56.605 17:19:51 -- nvmf/common.sh@294 -- # net_devs=() 00:16:56.605 17:19:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:56.605 17:19:51 -- nvmf/common.sh@295 -- # e810=() 00:16:56.605 17:19:51 -- nvmf/common.sh@295 -- # local -ga e810 00:16:56.605 17:19:51 -- nvmf/common.sh@296 -- # x722=() 00:16:56.605 17:19:51 -- nvmf/common.sh@296 -- # local -ga x722 00:16:56.605 17:19:51 -- nvmf/common.sh@297 -- # mlx=() 00:16:56.605 17:19:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:56.605 17:19:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.605 17:19:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:56.605 17:19:51 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:56.605 17:19:51 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:56.605 17:19:51 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
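The gather_supported_nvmf_pci_devs trace above classifies NICs by PCI vendor/device ID and, because mlx5 parts are present, keeps only the Mellanox list. The sketch below shows the same kind of enumeration done directly against lspci and sysfs; it is an illustration of the check, not the nvmf/common.sh implementation, and it assumes pciutils is installed and that all devices sit in PCI domain 0000.

#!/usr/bin/env bash
# Illustration only (not nvmf/common.sh): enumerate Mellanox ports the way the
# trace reports them, e.g. "Found 0000:d9:00.0 (0x15b3 - 0x1015): mlx_0_0".
set -euo pipefail

while read -r slot vendor_device; do
    vendor=${vendor_device%%:*}
    device=${vendor_device#*:}
    [[ $vendor == 15b3 ]] || continue                # Mellanox vendor ID
    for netdev in /sys/bus/pci/devices/0000:"$slot"/net/*; do
        [[ -e $netdev ]] || continue                 # port has no bound netdev
        echo "Found 0000:$slot (0x$vendor - 0x$device): ${netdev##*/}"
    done
done < <(lspci -n | awk '{print $1, $3}')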
00:16:56.605 17:19:51 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:56.605 17:19:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:56.605 17:19:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:56.605 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:56.605 17:19:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:56.605 17:19:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:56.605 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:56.605 17:19:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:56.605 17:19:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:56.605 17:19:51 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.605 17:19:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:56.605 17:19:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.605 17:19:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:56.605 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:56.605 17:19:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.605 17:19:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.605 17:19:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:56.605 17:19:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.605 17:19:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:56.605 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:56.605 17:19:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.605 17:19:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:56.605 17:19:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:56.605 17:19:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:56.605 17:19:51 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:56.605 17:19:51 -- nvmf/common.sh@57 -- # uname 00:16:56.605 17:19:51 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:56.605 17:19:51 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:56.605 17:19:51 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:56.605 17:19:51 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:56.605 
17:19:51 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:56.605 17:19:51 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:56.605 17:19:51 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:56.605 17:19:51 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:56.605 17:19:51 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:56.605 17:19:51 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:56.605 17:19:51 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:56.605 17:19:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:56.605 17:19:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:56.605 17:19:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:56.605 17:19:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:56.605 17:19:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:56.605 17:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:56.605 17:19:51 -- nvmf/common.sh@104 -- # continue 2 00:16:56.605 17:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:56.605 17:19:51 -- nvmf/common.sh@104 -- # continue 2 00:16:56.605 17:19:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:56.605 17:19:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:56.605 17:19:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:56.605 17:19:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:56.605 17:19:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:56.605 17:19:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:56.605 17:19:51 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:56.605 17:19:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:56.605 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:56.605 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:56.605 altname enp217s0f0np0 00:16:56.605 altname ens818f0np0 00:16:56.605 inet 192.168.100.8/24 scope global mlx_0_0 00:16:56.605 valid_lft forever preferred_lft forever 00:16:56.605 17:19:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:56.605 17:19:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:56.605 17:19:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:56.605 17:19:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:56.605 17:19:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:56.605 17:19:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:56.605 17:19:51 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:56.605 17:19:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:56.605 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:56.605 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:56.605 altname enp217s0f1np1 
00:16:56.605 altname ens818f1np1 00:16:56.605 inet 192.168.100.9/24 scope global mlx_0_1 00:16:56.605 valid_lft forever preferred_lft forever 00:16:56.605 17:19:51 -- nvmf/common.sh@410 -- # return 0 00:16:56.605 17:19:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:56.605 17:19:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:56.605 17:19:51 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:56.605 17:19:51 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:56.605 17:19:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:56.605 17:19:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:56.605 17:19:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:56.605 17:19:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:56.605 17:19:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:56.605 17:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.605 17:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:56.605 17:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:56.605 17:19:51 -- nvmf/common.sh@104 -- # continue 2 00:16:56.606 17:19:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:56.606 17:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.606 17:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:56.606 17:19:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.606 17:19:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:56.606 17:19:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:56.606 17:19:51 -- nvmf/common.sh@104 -- # continue 2 00:16:56.606 17:19:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:56.606 17:19:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:56.606 17:19:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:56.606 17:19:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:56.606 17:19:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:56.606 17:19:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:56.606 17:19:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:56.606 17:19:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:56.606 17:19:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:56.606 17:19:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:56.606 17:19:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:56.606 17:19:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:56.606 17:19:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:56.606 192.168.100.9' 00:16:56.606 17:19:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:56.606 192.168.100.9' 00:16:56.606 17:19:52 -- nvmf/common.sh@445 -- # head -n 1 00:16:56.606 17:19:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:56.606 17:19:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:56.606 192.168.100.9' 00:16:56.606 17:19:52 -- nvmf/common.sh@446 -- # tail -n +2 00:16:56.606 17:19:52 -- nvmf/common.sh@446 -- # head -n 1 00:16:56.606 17:19:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:56.606 17:19:52 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:56.606 17:19:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:56.606 17:19:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:56.606 17:19:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:56.606 17:19:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:56.606 17:19:52 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:56.606 17:19:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.606 17:19:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.606 17:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 17:19:52 -- nvmf/common.sh@469 -- # nvmfpid=1323962 00:16:56.606 17:19:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:56.606 17:19:52 -- nvmf/common.sh@470 -- # waitforlisten 1323962 00:16:56.606 17:19:52 -- common/autotest_common.sh@829 -- # '[' -z 1323962 ']' 00:16:56.606 17:19:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.606 17:19:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.606 17:19:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.606 17:19:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.606 17:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 [2024-12-14 17:19:52.113823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:56.606 [2024-12-14 17:19:52.113872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.606 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.606 [2024-12-14 17:19:52.182615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.606 [2024-12-14 17:19:52.219296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:56.606 [2024-12-14 17:19:52.219410] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.606 [2024-12-14 17:19:52.219419] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.606 [2024-12-14 17:19:52.219428] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
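Once the RDMA interfaces carry 192.168.100.8 and 192.168.100.9 and nvme-rdma is loaded, nvmfappstart launches the target with core mask 0xE and waits for its RPC socket before the test proceeds. A condensed sketch of that bring-up follows; the binary path, shared-memory id, and masks are copied from the trace, while the readiness poll is a simplification of waitforlisten and the rpc_get_methods probe is only a stand-in readiness check.

#!/usr/bin/env bash
# Condensed bring-up matching the trace: start nvmf_tgt on cores 1-3 (mask 0xE)
# and wait for its RPC socket to answer before running any test RPCs.
set -euo pipefail

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock

"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for _ in $(seq 1 100); do
    if "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_sock"
        break
    fi
    sleep 0.1
done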
00:16:56.606 [2024-12-14 17:19:52.219532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.606 [2024-12-14 17:19:52.219616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.606 [2024-12-14 17:19:52.219618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.606 17:19:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.606 17:19:52 -- common/autotest_common.sh@862 -- # return 0 00:16:56.606 17:19:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:56.606 17:19:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.606 17:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 17:19:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.606 17:19:52 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:56.606 17:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.606 17:19:52 -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 [2024-12-14 17:19:52.999302] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x855900/0x859db0) succeed. 00:16:56.606 [2024-12-14 17:19:53.008198] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x856e00/0x89b450) succeed. 00:16:56.606 17:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.606 17:19:53 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:56.606 17:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.606 17:19:53 -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 17:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.606 17:19:53 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:56.606 17:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.606 17:19:53 -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 [2024-12-14 17:19:53.122802] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:56.606 17:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.606 17:19:53 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:56.606 17:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.606 17:19:53 -- common/autotest_common.sh@10 -- # set +x 00:16:56.606 NULL1 00:16:56.606 17:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.606 17:19:53 -- target/connect_stress.sh@21 -- # PERF_PID=1324104 00:16:56.606 17:19:53 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:56.606 17:19:53 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:56.606 17:19:53 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- 
target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:56.606 17:19:53 -- target/connect_stress.sh@28 -- # cat 00:16:56.606 17:19:53 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:56.606 17:19:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.606 17:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.606 17:19:53 -- common/autotest_common.sh@10 -- # set +x 00:16:57.174 17:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.174 17:19:53 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:57.174 17:19:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.174 17:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.174 17:19:53 -- common/autotest_common.sh@10 -- # set +x 00:16:57.433 17:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.433 17:19:53 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:57.433 17:19:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.433 17:19:53 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:57.433 17:19:53 -- common/autotest_common.sh@10 -- # set +x 00:16:57.692 17:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.692 17:19:54 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:57.692 17:19:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.692 17:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.692 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:16:57.951 17:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.951 17:19:54 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:57.951 17:19:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.951 17:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.951 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:16:58.209 17:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.209 17:19:54 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:58.209 17:19:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.209 17:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.209 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:16:58.777 17:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.777 17:19:55 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:58.777 17:19:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.777 17:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.777 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:16:59.036 17:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.036 17:19:55 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:59.036 17:19:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.036 17:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.036 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:16:59.294 17:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.294 17:19:55 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:59.294 17:19:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.294 17:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.294 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:16:59.553 17:19:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.553 17:19:56 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:59.553 17:19:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.553 17:19:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.553 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:16:59.812 17:19:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.812 17:19:56 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:16:59.812 17:19:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.812 17:19:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.812 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 17:19:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.380 17:19:56 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:00.380 17:19:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.380 17:19:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.380 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:17:00.640 17:19:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.640 17:19:57 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:00.640 17:19:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.640 17:19:57 -- common/autotest_common.sh@561 -- # xtrace_disable 
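The repeated kill -0 1324104 checks above are the watchdog half of the connect_stress test: while the connect_stress binary keeps attaching to and detaching from the subsystem for ten seconds, the script hammers the target's RPC server and stops once the perf process is gone. A compact sketch of that structure follows; the RPC arguments, listener address, and connect_stress command line are copied from the trace, while the namespace attach and the single-RPC polling body are assumptions standing in for the 20-entry rpc.txt batch the real script replays.

#!/usr/bin/env bash
# Sketch of the connect_stress flow traced above; not the verbatim script.
set -euo pipefail

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
"$rpc" bdev_null_create NULL1 1000 512
"$rpc" nvmf_subsystem_add_ns "$nqn" NULL1   # assumed; not visible in this excerpt

"$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
perf_pid=$!

# Keep the RPC server busy until the stress tool finishes its 10-second run.
while kill -0 "$perf_pid" 2> /dev/null; do
    "$rpc" rpc_get_methods > /dev/null || true
done
wait "$perf_pid" || true

Polling with kill -0 instead of a plain wait is deliberate: it lets the script keep issuing management RPCs while hosts connect and disconnect, which is exactly the condition the test is meant to stress.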
00:17:00.640 17:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:00.899 17:19:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.899 17:19:57 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:00.899 17:19:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.899 17:19:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.899 17:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:01.157 17:19:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.158 17:19:57 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:01.158 17:19:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.158 17:19:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.158 17:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:01.417 17:19:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.417 17:19:58 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:01.417 17:19:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.417 17:19:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.417 17:19:58 -- common/autotest_common.sh@10 -- # set +x 00:17:01.985 17:19:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.985 17:19:58 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:01.985 17:19:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.985 17:19:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.985 17:19:58 -- common/autotest_common.sh@10 -- # set +x 00:17:02.244 17:19:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.244 17:19:58 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:02.244 17:19:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.244 17:19:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.244 17:19:58 -- common/autotest_common.sh@10 -- # set +x 00:17:02.502 17:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.503 17:19:59 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:02.503 17:19:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.503 17:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.503 17:19:59 -- common/autotest_common.sh@10 -- # set +x 00:17:02.761 17:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.761 17:19:59 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:02.761 17:19:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.761 17:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.761 17:19:59 -- common/autotest_common.sh@10 -- # set +x 00:17:03.329 17:19:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.329 17:19:59 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:03.329 17:19:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.329 17:19:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.329 17:19:59 -- common/autotest_common.sh@10 -- # set +x 00:17:03.588 17:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.588 17:20:00 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:03.588 17:20:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.588 17:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.588 17:20:00 -- common/autotest_common.sh@10 -- # set +x 00:17:03.847 17:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.847 17:20:00 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:03.847 17:20:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.847 17:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.847 
17:20:00 -- common/autotest_common.sh@10 -- # set +x 00:17:04.106 17:20:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.106 17:20:00 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:04.106 17:20:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.106 17:20:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.106 17:20:00 -- common/autotest_common.sh@10 -- # set +x 00:17:04.365 17:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.365 17:20:01 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:04.365 17:20:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.365 17:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.365 17:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:04.932 17:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.932 17:20:01 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:04.932 17:20:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.932 17:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.932 17:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:05.190 17:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.190 17:20:01 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:05.190 17:20:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.190 17:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.190 17:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:05.449 17:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.449 17:20:01 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:05.449 17:20:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.449 17:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.449 17:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:05.708 17:20:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.708 17:20:02 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:05.708 17:20:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.708 17:20:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.708 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:05.967 17:20:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.225 17:20:02 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:06.225 17:20:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.225 17:20:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.225 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:06.484 17:20:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.484 17:20:02 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:06.484 17:20:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.484 17:20:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.484 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:06.743 17:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.743 17:20:03 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:06.743 17:20:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.743 17:20:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.743 17:20:03 -- common/autotest_common.sh@10 -- # set +x 00:17:06.743 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:07.002 17:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.002 17:20:03 -- target/connect_stress.sh@34 -- # kill -0 1324104 00:17:07.002 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1324104) - No such process 00:17:07.002 17:20:03 -- target/connect_stress.sh@38 -- # wait 1324104 00:17:07.002 17:20:03 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:07.002 17:20:03 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:07.002 17:20:03 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:07.002 17:20:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:07.002 17:20:03 -- nvmf/common.sh@116 -- # sync 00:17:07.002 17:20:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:07.002 17:20:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:07.002 17:20:03 -- nvmf/common.sh@119 -- # set +e 00:17:07.002 17:20:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:07.002 17:20:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:07.002 rmmod nvme_rdma 00:17:07.002 rmmod nvme_fabrics 00:17:07.002 17:20:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:07.002 17:20:03 -- nvmf/common.sh@123 -- # set -e 00:17:07.002 17:20:03 -- nvmf/common.sh@124 -- # return 0 00:17:07.002 17:20:03 -- nvmf/common.sh@477 -- # '[' -n 1323962 ']' 00:17:07.002 17:20:03 -- nvmf/common.sh@478 -- # killprocess 1323962 00:17:07.002 17:20:03 -- common/autotest_common.sh@936 -- # '[' -z 1323962 ']' 00:17:07.002 17:20:03 -- common/autotest_common.sh@940 -- # kill -0 1323962 00:17:07.002 17:20:03 -- common/autotest_common.sh@941 -- # uname 00:17:07.002 17:20:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.002 17:20:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1323962 00:17:07.262 17:20:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:07.262 17:20:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:07.262 17:20:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1323962' 00:17:07.262 killing process with pid 1323962 00:17:07.262 17:20:03 -- common/autotest_common.sh@955 -- # kill 1323962 00:17:07.262 17:20:03 -- common/autotest_common.sh@960 -- # wait 1323962 00:17:07.521 17:20:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:07.521 17:20:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:07.521 00:17:07.521 real 0m19.052s 00:17:07.521 user 0m42.544s 00:17:07.521 sys 0m7.910s 00:17:07.521 17:20:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:07.521 17:20:03 -- common/autotest_common.sh@10 -- # set +x 00:17:07.521 ************************************ 00:17:07.521 END TEST nvmf_connect_stress 00:17:07.521 ************************************ 00:17:07.521 17:20:04 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:07.521 17:20:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:07.521 17:20:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.521 17:20:04 -- common/autotest_common.sh@10 -- # set +x 00:17:07.521 ************************************ 00:17:07.521 START TEST nvmf_fused_ordering 00:17:07.521 ************************************ 00:17:07.521 17:20:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:07.521 * Looking for test storage... 
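The loop traced above keeps probing the stress process (pid 1324104) with kill -0 and issues an RPC on each pass until connect_stress.sh finally reports "No such process", waits on the pid, and tears the target down. A minimal sketch of that polling pattern is shown here, using a plain sleep as a stand-in workload and a commented placeholder for the per-iteration RPC; it illustrates the shape of the loop only and is not the actual connect_stress.sh.

  #!/usr/bin/env bash
  # Sketch of the poll-until-gone pattern above: launch a background job,
  # probe it with kill -0, do per-iteration work while it is alive, then reap it.
  sleep 19 &                        # stand-in for the real stress workload
  stress_pid=$!

  while kill -0 "$stress_pid" 2>/dev/null; do
      # per-iteration work; the test issues an rpc_cmd here to exercise the target
      sleep 1
  done

  wait "$stress_pid" || true        # reap the job once kill -0 starts failing
  echo "stress workload $stress_pid has exited"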
00:17:07.521 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:07.521 17:20:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:07.521 17:20:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:07.521 17:20:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:07.521 17:20:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:07.521 17:20:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:07.521 17:20:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:07.521 17:20:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:07.521 17:20:04 -- scripts/common.sh@335 -- # IFS=.-: 00:17:07.521 17:20:04 -- scripts/common.sh@335 -- # read -ra ver1 00:17:07.521 17:20:04 -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.521 17:20:04 -- scripts/common.sh@336 -- # read -ra ver2 00:17:07.521 17:20:04 -- scripts/common.sh@337 -- # local 'op=<' 00:17:07.521 17:20:04 -- scripts/common.sh@339 -- # ver1_l=2 00:17:07.521 17:20:04 -- scripts/common.sh@340 -- # ver2_l=1 00:17:07.521 17:20:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:07.521 17:20:04 -- scripts/common.sh@343 -- # case "$op" in 00:17:07.521 17:20:04 -- scripts/common.sh@344 -- # : 1 00:17:07.521 17:20:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:07.521 17:20:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.521 17:20:04 -- scripts/common.sh@364 -- # decimal 1 00:17:07.521 17:20:04 -- scripts/common.sh@352 -- # local d=1 00:17:07.521 17:20:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.521 17:20:04 -- scripts/common.sh@354 -- # echo 1 00:17:07.521 17:20:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:07.521 17:20:04 -- scripts/common.sh@365 -- # decimal 2 00:17:07.780 17:20:04 -- scripts/common.sh@352 -- # local d=2 00:17:07.780 17:20:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.780 17:20:04 -- scripts/common.sh@354 -- # echo 2 00:17:07.780 17:20:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:07.780 17:20:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:07.780 17:20:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:07.780 17:20:04 -- scripts/common.sh@367 -- # return 0 00:17:07.780 17:20:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.780 17:20:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:07.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.780 --rc genhtml_branch_coverage=1 00:17:07.780 --rc genhtml_function_coverage=1 00:17:07.780 --rc genhtml_legend=1 00:17:07.780 --rc geninfo_all_blocks=1 00:17:07.780 --rc geninfo_unexecuted_blocks=1 00:17:07.780 00:17:07.780 ' 00:17:07.780 17:20:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:07.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.780 --rc genhtml_branch_coverage=1 00:17:07.780 --rc genhtml_function_coverage=1 00:17:07.780 --rc genhtml_legend=1 00:17:07.780 --rc geninfo_all_blocks=1 00:17:07.780 --rc geninfo_unexecuted_blocks=1 00:17:07.780 00:17:07.780 ' 00:17:07.780 17:20:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:07.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.780 --rc genhtml_branch_coverage=1 00:17:07.780 --rc genhtml_function_coverage=1 00:17:07.780 --rc genhtml_legend=1 00:17:07.780 --rc geninfo_all_blocks=1 00:17:07.780 --rc geninfo_unexecuted_blocks=1 00:17:07.780 00:17:07.780 ' 
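The trace above is scripts/common.sh deciding whether the installed lcov is older than 2: both version strings are split on '.', '-' and ':' and compared field by field, with decimal turning anything non-numeric into 0. A hedged reimplementation of that kind of dotted-version "less than" check is sketched below; it follows the same idea but is not the SPDK helper itself.

  # version_lt A B: succeed when dotted version A sorts before B
  # (same idea as the cmp_versions/decimal trace above, written compactly).
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          [[ $x =~ ^[0-9]+$ ]] || x=0       # non-numeric fields count as 0
          [[ $y =~ ^[0-9]+$ ]] || y=0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                              # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "1.15 < 2"      # matches the lt 1.15 2 result above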
00:17:07.780 17:20:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:07.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.780 --rc genhtml_branch_coverage=1 00:17:07.780 --rc genhtml_function_coverage=1 00:17:07.780 --rc genhtml_legend=1 00:17:07.780 --rc geninfo_all_blocks=1 00:17:07.780 --rc geninfo_unexecuted_blocks=1 00:17:07.780 00:17:07.780 ' 00:17:07.780 17:20:04 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.780 17:20:04 -- nvmf/common.sh@7 -- # uname -s 00:17:07.780 17:20:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.780 17:20:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.780 17:20:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.780 17:20:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.780 17:20:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.781 17:20:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.781 17:20:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.781 17:20:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.781 17:20:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.781 17:20:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.781 17:20:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:07.781 17:20:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:07.781 17:20:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.781 17:20:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.781 17:20:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.781 17:20:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:07.781 17:20:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.781 17:20:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.781 17:20:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.781 17:20:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.781 17:20:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.781 17:20:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.781 17:20:04 -- paths/export.sh@5 -- # export PATH 00:17:07.781 17:20:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.781 17:20:04 -- nvmf/common.sh@46 -- # : 0 00:17:07.781 17:20:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:07.781 17:20:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:07.781 17:20:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:07.781 17:20:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.781 17:20:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.781 17:20:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:07.781 17:20:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:07.781 17:20:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:07.781 17:20:04 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:07.781 17:20:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:07.781 17:20:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.781 17:20:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:07.781 17:20:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:07.781 17:20:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:07.781 17:20:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.781 17:20:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.781 17:20:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.781 17:20:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:07.781 17:20:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:07.781 17:20:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:07.781 17:20:04 -- common/autotest_common.sh@10 -- # set +x 00:17:14.349 17:20:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:14.349 17:20:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:14.349 17:20:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:14.349 17:20:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:14.349 17:20:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:14.349 17:20:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:14.349 17:20:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:14.349 17:20:10 -- nvmf/common.sh@294 -- # net_devs=() 00:17:14.349 17:20:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:14.349 17:20:10 -- nvmf/common.sh@295 -- # e810=() 00:17:14.349 17:20:10 -- nvmf/common.sh@295 -- # local -ga e810 00:17:14.349 17:20:10 -- nvmf/common.sh@296 -- # x722=() 
00:17:14.349 17:20:10 -- nvmf/common.sh@296 -- # local -ga x722 00:17:14.349 17:20:10 -- nvmf/common.sh@297 -- # mlx=() 00:17:14.349 17:20:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:14.349 17:20:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.349 17:20:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:14.349 17:20:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:14.349 17:20:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:14.349 17:20:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:14.349 17:20:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:14.349 17:20:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:14.349 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:14.349 17:20:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:14.349 17:20:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:14.349 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:14.349 17:20:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:14.349 17:20:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:14.349 17:20:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.349 17:20:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:14.349 17:20:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.349 17:20:10 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:14.349 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:14.349 17:20:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.349 17:20:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.349 17:20:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:14.349 17:20:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.349 17:20:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:14.349 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:14.349 17:20:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.349 17:20:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:14.349 17:20:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:14.349 17:20:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:14.349 17:20:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:14.349 17:20:10 -- nvmf/common.sh@57 -- # uname 00:17:14.349 17:20:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:14.349 17:20:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:14.349 17:20:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:14.349 17:20:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:14.349 17:20:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:14.349 17:20:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:14.349 17:20:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:14.349 17:20:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:14.349 17:20:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:14.349 17:20:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:14.349 17:20:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:14.349 17:20:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:14.349 17:20:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:14.349 17:20:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:14.349 17:20:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:14.349 17:20:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:14.349 17:20:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:14.349 17:20:10 -- nvmf/common.sh@104 -- # continue 2 00:17:14.349 17:20:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.349 17:20:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:14.349 17:20:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:14.349 17:20:10 -- nvmf/common.sh@104 -- # continue 2 00:17:14.349 17:20:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:14.349 17:20:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:14.349 17:20:11 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:17:14.349 17:20:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:14.349 17:20:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:14.349 17:20:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:14.349 17:20:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:14.349 17:20:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:14.349 17:20:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:14.349 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:14.349 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:14.349 altname enp217s0f0np0 00:17:14.349 altname ens818f0np0 00:17:14.349 inet 192.168.100.8/24 scope global mlx_0_0 00:17:14.349 valid_lft forever preferred_lft forever 00:17:14.349 17:20:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:14.349 17:20:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:14.349 17:20:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:14.349 17:20:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:14.349 17:20:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:14.349 17:20:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:14.349 17:20:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:14.349 17:20:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:14.349 17:20:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:14.349 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:14.349 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:14.349 altname enp217s0f1np1 00:17:14.349 altname ens818f1np1 00:17:14.350 inet 192.168.100.9/24 scope global mlx_0_1 00:17:14.350 valid_lft forever preferred_lft forever 00:17:14.350 17:20:11 -- nvmf/common.sh@410 -- # return 0 00:17:14.350 17:20:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:14.350 17:20:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:14.350 17:20:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:14.350 17:20:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:14.609 17:20:11 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:14.609 17:20:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:14.609 17:20:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:14.609 17:20:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:14.609 17:20:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:14.609 17:20:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:14.609 17:20:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:14.609 17:20:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.609 17:20:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:14.609 17:20:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:14.609 17:20:11 -- nvmf/common.sh@104 -- # continue 2 00:17:14.609 17:20:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:14.609 17:20:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.609 17:20:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:14.609 17:20:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.609 17:20:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:14.609 17:20:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:14.609 17:20:11 -- nvmf/common.sh@104 -- # continue 2 00:17:14.609 17:20:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:14.609 17:20:11 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:14.609 17:20:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:14.609 17:20:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:14.609 17:20:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:14.609 17:20:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:14.609 17:20:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:14.609 17:20:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:14.609 17:20:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:14.609 17:20:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:14.609 17:20:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:14.609 17:20:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:14.609 17:20:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:14.609 192.168.100.9' 00:17:14.609 17:20:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:14.609 192.168.100.9' 00:17:14.609 17:20:11 -- nvmf/common.sh@445 -- # head -n 1 00:17:14.609 17:20:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:14.609 17:20:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:14.609 192.168.100.9' 00:17:14.609 17:20:11 -- nvmf/common.sh@446 -- # tail -n +2 00:17:14.609 17:20:11 -- nvmf/common.sh@446 -- # head -n 1 00:17:14.609 17:20:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:14.609 17:20:11 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:14.609 17:20:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:14.609 17:20:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:14.609 17:20:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:14.609 17:20:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:14.609 17:20:11 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:14.609 17:20:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:14.609 17:20:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.609 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:14.609 17:20:11 -- nvmf/common.sh@469 -- # nvmfpid=1329347 00:17:14.609 17:20:11 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.609 17:20:11 -- nvmf/common.sh@470 -- # waitforlisten 1329347 00:17:14.609 17:20:11 -- common/autotest_common.sh@829 -- # '[' -z 1329347 ']' 00:17:14.609 17:20:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.609 17:20:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.609 17:20:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.609 17:20:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.609 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:14.609 [2024-12-14 17:20:11.193729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
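The two target addresses used below (192.168.100.8 and 192.168.100.9) are read straight off the mlx_0_0/mlx_0_1 interfaces with the ip/awk/cut pipeline traced above. Wrapped as a small helper, the same extraction looks roughly like this (interface name as the only argument; illustrative, not the common.sh function itself):

  # Print the first IPv4 address (without the /prefix length) on interface $1,
  # using the pipeline the trace shows: ip -o -4 | awk '{print $4}' | cut -d/ -f1
  first_ipv4_of() {
      local ifname=$1
      ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1 | head -n 1
  }

  # On this test bed: first_ipv4_of mlx_0_0 -> 192.168.100.8
  #                   first_ipv4_of mlx_0_1 -> 192.168.100.9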
00:17:14.609 [2024-12-14 17:20:11.193779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.609 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.610 [2024-12-14 17:20:11.263997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.869 [2024-12-14 17:20:11.300732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:14.869 [2024-12-14 17:20:11.300858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.869 [2024-12-14 17:20:11.300868] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.869 [2024-12-14 17:20:11.300878] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.869 [2024-12-14 17:20:11.300905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.437 17:20:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.437 17:20:12 -- common/autotest_common.sh@862 -- # return 0 00:17:15.437 17:20:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:15.437 17:20:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.437 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.437 17:20:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.437 17:20:12 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:15.437 17:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.437 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.437 [2024-12-14 17:20:12.083879] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8ef550/0x8f3a00) succeed. 00:17:15.437 [2024-12-14 17:20:12.092835] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8f0a00/0x9350a0) succeed. 
00:17:15.697 17:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.697 17:20:12 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.697 17:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.697 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.697 17:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.697 17:20:12 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:15.697 17:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.697 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.697 [2024-12-14 17:20:12.157736] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:15.697 17:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.697 17:20:12 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:15.697 17:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.697 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.697 NULL1 00:17:15.697 17:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.697 17:20:12 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:15.697 17:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.697 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.697 17:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.697 17:20:12 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:15.697 17:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.697 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.697 17:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.697 17:20:12 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:15.697 [2024-12-14 17:20:12.213547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
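Before fused_ordering is launched, the target is configured entirely over JSON-RPC: an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 192.168.100.8:4420, and a 1000 MiB null bdev attached as namespace 1. The rpc_cmd calls above map onto SPDK's rpc.py roughly as follows (parameters copied from the trace; the rpc.py path is assumed to be relative to an SPDK checkout):

  # Condensed rpc.py equivalent of the rpc_cmd sequence above, run against a
  # started nvmf_tgt; values are the ones visible in the trace.
  rpc=./scripts/rpc.py   # assumed location inside the SPDK repo

  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1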
00:17:15.697 [2024-12-14 17:20:12.213600] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329580 ] 00:17:15.697 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.957 Attached to nqn.2016-06.io.spdk:cnode1 00:17:15.957 Namespace ID: 1 size: 1GB 00:17:15.957 fused_ordering(0) 00:17:15.957 fused_ordering(1) 00:17:15.957 fused_ordering(2) 00:17:15.957 fused_ordering(3) 00:17:15.957 fused_ordering(4) 00:17:15.957 fused_ordering(5) 00:17:15.957 fused_ordering(6) 00:17:15.957 fused_ordering(7) 00:17:15.957 fused_ordering(8) 00:17:15.957 fused_ordering(9) 00:17:15.957 fused_ordering(10) 00:17:15.957 fused_ordering(11) 00:17:15.957 fused_ordering(12) 00:17:15.957 fused_ordering(13) 00:17:15.957 fused_ordering(14) 00:17:15.957 fused_ordering(15) 00:17:15.957 fused_ordering(16) 00:17:15.957 fused_ordering(17) 00:17:15.957 fused_ordering(18) 00:17:15.957 fused_ordering(19) 00:17:15.957 fused_ordering(20) 00:17:15.957 fused_ordering(21) 00:17:15.957 fused_ordering(22) 00:17:15.957 fused_ordering(23) 00:17:15.957 fused_ordering(24) 00:17:15.957 fused_ordering(25) 00:17:15.957 fused_ordering(26) 00:17:15.957 fused_ordering(27) 00:17:15.957 fused_ordering(28) 00:17:15.957 fused_ordering(29) 00:17:15.957 fused_ordering(30) 00:17:15.957 fused_ordering(31) 00:17:15.957 fused_ordering(32) 00:17:15.957 fused_ordering(33) 00:17:15.957 fused_ordering(34) 00:17:15.957 fused_ordering(35) 00:17:15.957 fused_ordering(36) 00:17:15.957 fused_ordering(37) 00:17:15.957 fused_ordering(38) 00:17:15.957 fused_ordering(39) 00:17:15.957 fused_ordering(40) 00:17:15.957 fused_ordering(41) 00:17:15.957 fused_ordering(42) 00:17:15.957 fused_ordering(43) 00:17:15.957 fused_ordering(44) 00:17:15.957 fused_ordering(45) 00:17:15.957 fused_ordering(46) 00:17:15.957 fused_ordering(47) 00:17:15.957 fused_ordering(48) 00:17:15.957 fused_ordering(49) 00:17:15.957 fused_ordering(50) 00:17:15.957 fused_ordering(51) 00:17:15.957 fused_ordering(52) 00:17:15.957 fused_ordering(53) 00:17:15.957 fused_ordering(54) 00:17:15.957 fused_ordering(55) 00:17:15.957 fused_ordering(56) 00:17:15.957 fused_ordering(57) 00:17:15.957 fused_ordering(58) 00:17:15.957 fused_ordering(59) 00:17:15.957 fused_ordering(60) 00:17:15.957 fused_ordering(61) 00:17:15.957 fused_ordering(62) 00:17:15.957 fused_ordering(63) 00:17:15.957 fused_ordering(64) 00:17:15.957 fused_ordering(65) 00:17:15.957 fused_ordering(66) 00:17:15.957 fused_ordering(67) 00:17:15.957 fused_ordering(68) 00:17:15.957 fused_ordering(69) 00:17:15.957 fused_ordering(70) 00:17:15.957 fused_ordering(71) 00:17:15.957 fused_ordering(72) 00:17:15.957 fused_ordering(73) 00:17:15.957 fused_ordering(74) 00:17:15.957 fused_ordering(75) 00:17:15.957 fused_ordering(76) 00:17:15.957 fused_ordering(77) 00:17:15.957 fused_ordering(78) 00:17:15.957 fused_ordering(79) 00:17:15.957 fused_ordering(80) 00:17:15.957 fused_ordering(81) 00:17:15.957 fused_ordering(82) 00:17:15.957 fused_ordering(83) 00:17:15.957 fused_ordering(84) 00:17:15.957 fused_ordering(85) 00:17:15.957 fused_ordering(86) 00:17:15.957 fused_ordering(87) 00:17:15.957 fused_ordering(88) 00:17:15.957 fused_ordering(89) 00:17:15.957 fused_ordering(90) 00:17:15.957 fused_ordering(91) 00:17:15.957 fused_ordering(92) 00:17:15.957 fused_ordering(93) 00:17:15.957 fused_ordering(94) 00:17:15.957 fused_ordering(95) 00:17:15.957 fused_ordering(96) 00:17:15.957 
fused_ordering(97) 00:17:15.957 fused_ordering(98) 00:17:15.957 fused_ordering(99) 00:17:15.957 fused_ordering(100) 00:17:15.957 fused_ordering(101) 00:17:15.957 fused_ordering(102) 00:17:15.957 fused_ordering(103) 00:17:15.957 fused_ordering(104) 00:17:15.957 fused_ordering(105) 00:17:15.957 fused_ordering(106) 00:17:15.957 fused_ordering(107) 00:17:15.957 fused_ordering(108) 00:17:15.957 fused_ordering(109) 00:17:15.957 fused_ordering(110) 00:17:15.957 fused_ordering(111) 00:17:15.957 fused_ordering(112) 00:17:15.957 fused_ordering(113) 00:17:15.957 fused_ordering(114) 00:17:15.957 fused_ordering(115) 00:17:15.957 fused_ordering(116) 00:17:15.957 fused_ordering(117) 00:17:15.957 fused_ordering(118) 00:17:15.957 fused_ordering(119) 00:17:15.957 fused_ordering(120) 00:17:15.958 fused_ordering(121) 00:17:15.958 fused_ordering(122) 00:17:15.958 fused_ordering(123) 00:17:15.958 fused_ordering(124) 00:17:15.958 fused_ordering(125) 00:17:15.958 fused_ordering(126) 00:17:15.958 fused_ordering(127) 00:17:15.958 fused_ordering(128) 00:17:15.958 fused_ordering(129) 00:17:15.958 fused_ordering(130) 00:17:15.958 fused_ordering(131) 00:17:15.958 fused_ordering(132) 00:17:15.958 fused_ordering(133) 00:17:15.958 fused_ordering(134) 00:17:15.958 fused_ordering(135) 00:17:15.958 fused_ordering(136) 00:17:15.958 fused_ordering(137) 00:17:15.958 fused_ordering(138) 00:17:15.958 fused_ordering(139) 00:17:15.958 fused_ordering(140) 00:17:15.958 fused_ordering(141) 00:17:15.958 fused_ordering(142) 00:17:15.958 fused_ordering(143) 00:17:15.958 fused_ordering(144) 00:17:15.958 fused_ordering(145) 00:17:15.958 fused_ordering(146) 00:17:15.958 fused_ordering(147) 00:17:15.958 fused_ordering(148) 00:17:15.958 fused_ordering(149) 00:17:15.958 fused_ordering(150) 00:17:15.958 fused_ordering(151) 00:17:15.958 fused_ordering(152) 00:17:15.958 fused_ordering(153) 00:17:15.958 fused_ordering(154) 00:17:15.958 fused_ordering(155) 00:17:15.958 fused_ordering(156) 00:17:15.958 fused_ordering(157) 00:17:15.958 fused_ordering(158) 00:17:15.958 fused_ordering(159) 00:17:15.958 fused_ordering(160) 00:17:15.958 fused_ordering(161) 00:17:15.958 fused_ordering(162) 00:17:15.958 fused_ordering(163) 00:17:15.958 fused_ordering(164) 00:17:15.958 fused_ordering(165) 00:17:15.958 fused_ordering(166) 00:17:15.958 fused_ordering(167) 00:17:15.958 fused_ordering(168) 00:17:15.958 fused_ordering(169) 00:17:15.958 fused_ordering(170) 00:17:15.958 fused_ordering(171) 00:17:15.958 fused_ordering(172) 00:17:15.958 fused_ordering(173) 00:17:15.958 fused_ordering(174) 00:17:15.958 fused_ordering(175) 00:17:15.958 fused_ordering(176) 00:17:15.958 fused_ordering(177) 00:17:15.958 fused_ordering(178) 00:17:15.958 fused_ordering(179) 00:17:15.958 fused_ordering(180) 00:17:15.958 fused_ordering(181) 00:17:15.958 fused_ordering(182) 00:17:15.958 fused_ordering(183) 00:17:15.958 fused_ordering(184) 00:17:15.958 fused_ordering(185) 00:17:15.958 fused_ordering(186) 00:17:15.958 fused_ordering(187) 00:17:15.958 fused_ordering(188) 00:17:15.958 fused_ordering(189) 00:17:15.958 fused_ordering(190) 00:17:15.958 fused_ordering(191) 00:17:15.958 fused_ordering(192) 00:17:15.958 fused_ordering(193) 00:17:15.958 fused_ordering(194) 00:17:15.958 fused_ordering(195) 00:17:15.958 fused_ordering(196) 00:17:15.958 fused_ordering(197) 00:17:15.958 fused_ordering(198) 00:17:15.958 fused_ordering(199) 00:17:15.958 fused_ordering(200) 00:17:15.958 fused_ordering(201) 00:17:15.958 fused_ordering(202) 00:17:15.958 fused_ordering(203) 00:17:15.958 fused_ordering(204) 
00:17:15.958 fused_ordering(205) 00:17:15.958 fused_ordering(206) 00:17:15.958 fused_ordering(207) 00:17:15.958 fused_ordering(208) 00:17:15.958 fused_ordering(209) 00:17:15.958 fused_ordering(210) 00:17:15.958 fused_ordering(211) 00:17:15.958 fused_ordering(212) 00:17:15.958 fused_ordering(213) 00:17:15.958 fused_ordering(214) 00:17:15.958 fused_ordering(215) 00:17:15.958 fused_ordering(216) 00:17:15.958 fused_ordering(217) 00:17:15.958 fused_ordering(218) 00:17:15.958 fused_ordering(219) 00:17:15.958 fused_ordering(220) 00:17:15.958 fused_ordering(221) 00:17:15.958 fused_ordering(222) 00:17:15.958 fused_ordering(223) 00:17:15.958 fused_ordering(224) 00:17:15.958 fused_ordering(225) 00:17:15.958 fused_ordering(226) 00:17:15.958 fused_ordering(227) 00:17:15.958 fused_ordering(228) 00:17:15.958 fused_ordering(229) 00:17:15.958 fused_ordering(230) 00:17:15.958 fused_ordering(231) 00:17:15.958 fused_ordering(232) 00:17:15.958 fused_ordering(233) 00:17:15.958 fused_ordering(234) 00:17:15.958 fused_ordering(235) 00:17:15.958 fused_ordering(236) 00:17:15.958 fused_ordering(237) 00:17:15.958 fused_ordering(238) 00:17:15.958 fused_ordering(239) 00:17:15.958 fused_ordering(240) 00:17:15.958 fused_ordering(241) 00:17:15.958 fused_ordering(242) 00:17:15.958 fused_ordering(243) 00:17:15.958 fused_ordering(244) 00:17:15.958 fused_ordering(245) 00:17:15.958 fused_ordering(246) 00:17:15.958 fused_ordering(247) 00:17:15.958 fused_ordering(248) 00:17:15.958 fused_ordering(249) 00:17:15.958 fused_ordering(250) 00:17:15.958 fused_ordering(251) 00:17:15.958 fused_ordering(252) 00:17:15.958 fused_ordering(253) 00:17:15.958 fused_ordering(254) 00:17:15.958 fused_ordering(255) 00:17:15.958 fused_ordering(256) 00:17:15.958 fused_ordering(257) 00:17:15.958 fused_ordering(258) 00:17:15.958 fused_ordering(259) 00:17:15.958 fused_ordering(260) 00:17:15.958 fused_ordering(261) 00:17:15.958 fused_ordering(262) 00:17:15.958 fused_ordering(263) 00:17:15.958 fused_ordering(264) 00:17:15.958 fused_ordering(265) 00:17:15.958 fused_ordering(266) 00:17:15.958 fused_ordering(267) 00:17:15.958 fused_ordering(268) 00:17:15.958 fused_ordering(269) 00:17:15.958 fused_ordering(270) 00:17:15.958 fused_ordering(271) 00:17:15.958 fused_ordering(272) 00:17:15.958 fused_ordering(273) 00:17:15.958 fused_ordering(274) 00:17:15.958 fused_ordering(275) 00:17:15.958 fused_ordering(276) 00:17:15.958 fused_ordering(277) 00:17:15.958 fused_ordering(278) 00:17:15.958 fused_ordering(279) 00:17:15.958 fused_ordering(280) 00:17:15.958 fused_ordering(281) 00:17:15.958 fused_ordering(282) 00:17:15.958 fused_ordering(283) 00:17:15.958 fused_ordering(284) 00:17:15.958 fused_ordering(285) 00:17:15.958 fused_ordering(286) 00:17:15.958 fused_ordering(287) 00:17:15.958 fused_ordering(288) 00:17:15.958 fused_ordering(289) 00:17:15.958 fused_ordering(290) 00:17:15.958 fused_ordering(291) 00:17:15.958 fused_ordering(292) 00:17:15.958 fused_ordering(293) 00:17:15.958 fused_ordering(294) 00:17:15.958 fused_ordering(295) 00:17:15.958 fused_ordering(296) 00:17:15.958 fused_ordering(297) 00:17:15.958 fused_ordering(298) 00:17:15.958 fused_ordering(299) 00:17:15.958 fused_ordering(300) 00:17:15.958 fused_ordering(301) 00:17:15.958 fused_ordering(302) 00:17:15.958 fused_ordering(303) 00:17:15.958 fused_ordering(304) 00:17:15.958 fused_ordering(305) 00:17:15.958 fused_ordering(306) 00:17:15.958 fused_ordering(307) 00:17:15.958 fused_ordering(308) 00:17:15.958 fused_ordering(309) 00:17:15.958 fused_ordering(310) 00:17:15.958 fused_ordering(311) 00:17:15.958 
fused_ordering(312) 00:17:15.958 fused_ordering(313) 00:17:15.958 fused_ordering(314) 00:17:15.958 fused_ordering(315) 00:17:15.958 fused_ordering(316) 00:17:15.958 fused_ordering(317) 00:17:15.958 fused_ordering(318) 00:17:15.958 fused_ordering(319) 00:17:15.958 fused_ordering(320) 00:17:15.958 fused_ordering(321) 00:17:15.958 fused_ordering(322) 00:17:15.958 fused_ordering(323) 00:17:15.958 fused_ordering(324) 00:17:15.958 fused_ordering(325) 00:17:15.958 fused_ordering(326) 00:17:15.958 fused_ordering(327) 00:17:15.958 fused_ordering(328) 00:17:15.958 fused_ordering(329) 00:17:15.958 fused_ordering(330) 00:17:15.958 fused_ordering(331) 00:17:15.958 fused_ordering(332) 00:17:15.958 fused_ordering(333) 00:17:15.958 fused_ordering(334) 00:17:15.958 fused_ordering(335) 00:17:15.958 fused_ordering(336) 00:17:15.958 fused_ordering(337) 00:17:15.958 fused_ordering(338) 00:17:15.958 fused_ordering(339) 00:17:15.958 fused_ordering(340) 00:17:15.958 fused_ordering(341) 00:17:15.958 fused_ordering(342) 00:17:15.958 fused_ordering(343) 00:17:15.958 fused_ordering(344) 00:17:15.958 fused_ordering(345) 00:17:15.958 fused_ordering(346) 00:17:15.958 fused_ordering(347) 00:17:15.958 fused_ordering(348) 00:17:15.958 fused_ordering(349) 00:17:15.958 fused_ordering(350) 00:17:15.958 fused_ordering(351) 00:17:15.958 fused_ordering(352) 00:17:15.958 fused_ordering(353) 00:17:15.958 fused_ordering(354) 00:17:15.958 fused_ordering(355) 00:17:15.958 fused_ordering(356) 00:17:15.958 fused_ordering(357) 00:17:15.958 fused_ordering(358) 00:17:15.958 fused_ordering(359) 00:17:15.958 fused_ordering(360) 00:17:15.958 fused_ordering(361) 00:17:15.958 fused_ordering(362) 00:17:15.958 fused_ordering(363) 00:17:15.958 fused_ordering(364) 00:17:15.958 fused_ordering(365) 00:17:15.958 fused_ordering(366) 00:17:15.958 fused_ordering(367) 00:17:15.958 fused_ordering(368) 00:17:15.958 fused_ordering(369) 00:17:15.958 fused_ordering(370) 00:17:15.958 fused_ordering(371) 00:17:15.958 fused_ordering(372) 00:17:15.958 fused_ordering(373) 00:17:15.958 fused_ordering(374) 00:17:15.958 fused_ordering(375) 00:17:15.958 fused_ordering(376) 00:17:15.958 fused_ordering(377) 00:17:15.958 fused_ordering(378) 00:17:15.958 fused_ordering(379) 00:17:15.958 fused_ordering(380) 00:17:15.958 fused_ordering(381) 00:17:15.958 fused_ordering(382) 00:17:15.958 fused_ordering(383) 00:17:15.958 fused_ordering(384) 00:17:15.958 fused_ordering(385) 00:17:15.958 fused_ordering(386) 00:17:15.958 fused_ordering(387) 00:17:15.958 fused_ordering(388) 00:17:15.958 fused_ordering(389) 00:17:15.958 fused_ordering(390) 00:17:15.958 fused_ordering(391) 00:17:15.958 fused_ordering(392) 00:17:15.958 fused_ordering(393) 00:17:15.958 fused_ordering(394) 00:17:15.958 fused_ordering(395) 00:17:15.958 fused_ordering(396) 00:17:15.958 fused_ordering(397) 00:17:15.958 fused_ordering(398) 00:17:15.958 fused_ordering(399) 00:17:15.958 fused_ordering(400) 00:17:15.958 fused_ordering(401) 00:17:15.958 fused_ordering(402) 00:17:15.958 fused_ordering(403) 00:17:15.959 fused_ordering(404) 00:17:15.959 fused_ordering(405) 00:17:15.959 fused_ordering(406) 00:17:15.959 fused_ordering(407) 00:17:15.959 fused_ordering(408) 00:17:15.959 fused_ordering(409) 00:17:15.959 fused_ordering(410) 00:17:15.959 fused_ordering(411) 00:17:15.959 fused_ordering(412) 00:17:15.959 fused_ordering(413) 00:17:15.959 fused_ordering(414) 00:17:15.959 fused_ordering(415) 00:17:15.959 fused_ordering(416) 00:17:15.959 fused_ordering(417) 00:17:15.959 fused_ordering(418) 00:17:15.959 fused_ordering(419) 
00:17:15.959 fused_ordering(420) 00:17:15.959 fused_ordering(421) 00:17:15.959 fused_ordering(422) 00:17:15.959 fused_ordering(423) 00:17:15.959 fused_ordering(424) 00:17:15.959 fused_ordering(425) 00:17:15.959 fused_ordering(426) 00:17:15.959 fused_ordering(427) 00:17:15.959 fused_ordering(428) 00:17:15.959 fused_ordering(429) 00:17:15.959 fused_ordering(430) 00:17:15.959 fused_ordering(431) 00:17:15.959 fused_ordering(432) 00:17:15.959 fused_ordering(433) 00:17:15.959 fused_ordering(434) 00:17:15.959 fused_ordering(435) 00:17:15.959 fused_ordering(436) 00:17:15.959 fused_ordering(437) 00:17:15.959 fused_ordering(438) 00:17:15.959 fused_ordering(439) 00:17:15.959 fused_ordering(440) 00:17:15.959 fused_ordering(441) 00:17:15.959 fused_ordering(442) 00:17:15.959 fused_ordering(443) 00:17:15.959 fused_ordering(444) 00:17:15.959 fused_ordering(445) 00:17:15.959 fused_ordering(446) 00:17:15.959 fused_ordering(447) 00:17:15.959 fused_ordering(448) 00:17:15.959 fused_ordering(449) 00:17:15.959 fused_ordering(450) 00:17:15.959 fused_ordering(451) 00:17:15.959 fused_ordering(452) 00:17:15.959 fused_ordering(453) 00:17:15.959 fused_ordering(454) 00:17:15.959 fused_ordering(455) 00:17:15.959 fused_ordering(456) 00:17:15.959 fused_ordering(457) 00:17:15.959 fused_ordering(458) 00:17:15.959 fused_ordering(459) 00:17:15.959 fused_ordering(460) 00:17:15.959 fused_ordering(461) 00:17:15.959 fused_ordering(462) 00:17:15.959 fused_ordering(463) 00:17:15.959 fused_ordering(464) 00:17:15.959 fused_ordering(465) 00:17:15.959 fused_ordering(466) 00:17:15.959 fused_ordering(467) 00:17:15.959 fused_ordering(468) 00:17:15.959 fused_ordering(469) 00:17:15.959 fused_ordering(470) 00:17:15.959 fused_ordering(471) 00:17:15.959 fused_ordering(472) 00:17:15.959 fused_ordering(473) 00:17:15.959 fused_ordering(474) 00:17:15.959 fused_ordering(475) 00:17:15.959 fused_ordering(476) 00:17:15.959 fused_ordering(477) 00:17:15.959 fused_ordering(478) 00:17:15.959 fused_ordering(479) 00:17:15.959 fused_ordering(480) 00:17:15.959 fused_ordering(481) 00:17:15.959 fused_ordering(482) 00:17:15.959 fused_ordering(483) 00:17:15.959 fused_ordering(484) 00:17:15.959 fused_ordering(485) 00:17:15.959 fused_ordering(486) 00:17:15.959 fused_ordering(487) 00:17:15.959 fused_ordering(488) 00:17:15.959 fused_ordering(489) 00:17:15.959 fused_ordering(490) 00:17:15.959 fused_ordering(491) 00:17:15.959 fused_ordering(492) 00:17:15.959 fused_ordering(493) 00:17:15.959 fused_ordering(494) 00:17:15.959 fused_ordering(495) 00:17:15.959 fused_ordering(496) 00:17:15.959 fused_ordering(497) 00:17:15.959 fused_ordering(498) 00:17:15.959 fused_ordering(499) 00:17:15.959 fused_ordering(500) 00:17:15.959 fused_ordering(501) 00:17:15.959 fused_ordering(502) 00:17:15.959 fused_ordering(503) 00:17:15.959 fused_ordering(504) 00:17:15.959 fused_ordering(505) 00:17:15.959 fused_ordering(506) 00:17:15.959 fused_ordering(507) 00:17:15.959 fused_ordering(508) 00:17:15.959 fused_ordering(509) 00:17:15.959 fused_ordering(510) 00:17:15.959 fused_ordering(511) 00:17:15.959 fused_ordering(512) 00:17:15.959 fused_ordering(513) 00:17:15.959 fused_ordering(514) 00:17:15.959 fused_ordering(515) 00:17:15.959 fused_ordering(516) 00:17:15.959 fused_ordering(517) 00:17:15.959 fused_ordering(518) 00:17:15.959 fused_ordering(519) 00:17:15.959 fused_ordering(520) 00:17:15.959 fused_ordering(521) 00:17:15.959 fused_ordering(522) 00:17:15.959 fused_ordering(523) 00:17:15.959 fused_ordering(524) 00:17:15.959 fused_ordering(525) 00:17:15.959 fused_ordering(526) 00:17:15.959 
fused_ordering(527) 00:17:15.959 [repetitive output elided: fused_ordering(528) through fused_ordering(956) completed between 00:17:15.959 and 00:17:16.220]
fused_ordering(957) 00:17:16.220 fused_ordering(958) 00:17:16.220 fused_ordering(959) 00:17:16.220 fused_ordering(960) 00:17:16.220 fused_ordering(961) 00:17:16.220 fused_ordering(962) 00:17:16.220 fused_ordering(963) 00:17:16.220 fused_ordering(964) 00:17:16.220 fused_ordering(965) 00:17:16.220 fused_ordering(966) 00:17:16.220 fused_ordering(967) 00:17:16.220 fused_ordering(968) 00:17:16.220 fused_ordering(969) 00:17:16.220 fused_ordering(970) 00:17:16.220 fused_ordering(971) 00:17:16.220 fused_ordering(972) 00:17:16.220 fused_ordering(973) 00:17:16.220 fused_ordering(974) 00:17:16.220 fused_ordering(975) 00:17:16.220 fused_ordering(976) 00:17:16.220 fused_ordering(977) 00:17:16.220 fused_ordering(978) 00:17:16.220 fused_ordering(979) 00:17:16.220 fused_ordering(980) 00:17:16.220 fused_ordering(981) 00:17:16.220 fused_ordering(982) 00:17:16.220 fused_ordering(983) 00:17:16.220 fused_ordering(984) 00:17:16.220 fused_ordering(985) 00:17:16.220 fused_ordering(986) 00:17:16.220 fused_ordering(987) 00:17:16.220 fused_ordering(988) 00:17:16.220 fused_ordering(989) 00:17:16.220 fused_ordering(990) 00:17:16.220 fused_ordering(991) 00:17:16.220 fused_ordering(992) 00:17:16.220 fused_ordering(993) 00:17:16.220 fused_ordering(994) 00:17:16.220 fused_ordering(995) 00:17:16.220 fused_ordering(996) 00:17:16.220 fused_ordering(997) 00:17:16.220 fused_ordering(998) 00:17:16.220 fused_ordering(999) 00:17:16.220 fused_ordering(1000) 00:17:16.220 fused_ordering(1001) 00:17:16.220 fused_ordering(1002) 00:17:16.220 fused_ordering(1003) 00:17:16.220 fused_ordering(1004) 00:17:16.220 fused_ordering(1005) 00:17:16.220 fused_ordering(1006) 00:17:16.220 fused_ordering(1007) 00:17:16.220 fused_ordering(1008) 00:17:16.220 fused_ordering(1009) 00:17:16.220 fused_ordering(1010) 00:17:16.220 fused_ordering(1011) 00:17:16.220 fused_ordering(1012) 00:17:16.220 fused_ordering(1013) 00:17:16.220 fused_ordering(1014) 00:17:16.220 fused_ordering(1015) 00:17:16.221 fused_ordering(1016) 00:17:16.221 fused_ordering(1017) 00:17:16.221 fused_ordering(1018) 00:17:16.221 fused_ordering(1019) 00:17:16.221 fused_ordering(1020) 00:17:16.221 fused_ordering(1021) 00:17:16.221 fused_ordering(1022) 00:17:16.221 fused_ordering(1023) 00:17:16.221 17:20:12 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:16.221 17:20:12 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:16.221 17:20:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:16.221 17:20:12 -- nvmf/common.sh@116 -- # sync 00:17:16.221 17:20:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:16.221 17:20:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:16.221 17:20:12 -- nvmf/common.sh@119 -- # set +e 00:17:16.221 17:20:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:16.221 17:20:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:16.221 rmmod nvme_rdma 00:17:16.480 rmmod nvme_fabrics 00:17:16.480 17:20:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:16.480 17:20:12 -- nvmf/common.sh@123 -- # set -e 00:17:16.480 17:20:12 -- nvmf/common.sh@124 -- # return 0 00:17:16.480 17:20:12 -- nvmf/common.sh@477 -- # '[' -n 1329347 ']' 00:17:16.480 17:20:12 -- nvmf/common.sh@478 -- # killprocess 1329347 00:17:16.480 17:20:12 -- common/autotest_common.sh@936 -- # '[' -z 1329347 ']' 00:17:16.480 17:20:12 -- common/autotest_common.sh@940 -- # kill -0 1329347 00:17:16.480 17:20:12 -- common/autotest_common.sh@941 -- # uname 00:17:16.480 17:20:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.480 17:20:12 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1329347 00:17:16.480 17:20:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:16.480 17:20:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:16.480 17:20:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1329347' 00:17:16.480 killing process with pid 1329347 00:17:16.480 17:20:13 -- common/autotest_common.sh@955 -- # kill 1329347 00:17:16.480 17:20:13 -- common/autotest_common.sh@960 -- # wait 1329347 00:17:16.740 17:20:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:16.740 17:20:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:16.740 00:17:16.740 real 0m9.183s 00:17:16.740 user 0m4.788s 00:17:16.740 sys 0m5.755s 00:17:16.740 17:20:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:16.740 17:20:13 -- common/autotest_common.sh@10 -- # set +x 00:17:16.740 ************************************ 00:17:16.740 END TEST nvmf_fused_ordering 00:17:16.740 ************************************ 00:17:16.740 17:20:13 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:16.740 17:20:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:16.740 17:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.740 17:20:13 -- common/autotest_common.sh@10 -- # set +x 00:17:16.740 ************************************ 00:17:16.740 START TEST nvmf_delete_subsystem 00:17:16.740 ************************************ 00:17:16.740 17:20:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:16.740 * Looking for test storage... 00:17:16.740 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:16.740 17:20:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:16.740 17:20:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:16.740 17:20:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:16.740 17:20:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:16.740 17:20:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:16.740 17:20:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:16.740 17:20:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:16.740 17:20:13 -- scripts/common.sh@335 -- # IFS=.-: 00:17:16.740 17:20:13 -- scripts/common.sh@335 -- # read -ra ver1 00:17:16.740 17:20:13 -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.740 17:20:13 -- scripts/common.sh@336 -- # read -ra ver2 00:17:16.740 17:20:13 -- scripts/common.sh@337 -- # local 'op=<' 00:17:16.740 17:20:13 -- scripts/common.sh@339 -- # ver1_l=2 00:17:16.740 17:20:13 -- scripts/common.sh@340 -- # ver2_l=1 00:17:16.740 17:20:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:16.740 17:20:13 -- scripts/common.sh@343 -- # case "$op" in 00:17:16.740 17:20:13 -- scripts/common.sh@344 -- # : 1 00:17:16.740 17:20:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:16.740 17:20:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.740 17:20:13 -- scripts/common.sh@364 -- # decimal 1 00:17:16.740 17:20:13 -- scripts/common.sh@352 -- # local d=1 00:17:16.740 17:20:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.740 17:20:13 -- scripts/common.sh@354 -- # echo 1 00:17:16.740 17:20:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:16.740 17:20:13 -- scripts/common.sh@365 -- # decimal 2 00:17:16.740 17:20:13 -- scripts/common.sh@352 -- # local d=2 00:17:16.740 17:20:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.740 17:20:13 -- scripts/common.sh@354 -- # echo 2 00:17:16.740 17:20:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:16.740 17:20:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:16.740 17:20:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:16.740 17:20:13 -- scripts/common.sh@367 -- # return 0 00:17:16.740 17:20:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.740 17:20:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:16.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.740 --rc genhtml_branch_coverage=1 00:17:16.740 --rc genhtml_function_coverage=1 00:17:16.740 --rc genhtml_legend=1 00:17:16.740 --rc geninfo_all_blocks=1 00:17:16.740 --rc geninfo_unexecuted_blocks=1 00:17:16.740 00:17:16.740 ' 00:17:16.740 17:20:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:16.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.740 --rc genhtml_branch_coverage=1 00:17:16.740 --rc genhtml_function_coverage=1 00:17:16.740 --rc genhtml_legend=1 00:17:16.740 --rc geninfo_all_blocks=1 00:17:16.740 --rc geninfo_unexecuted_blocks=1 00:17:16.740 00:17:16.740 ' 00:17:16.740 17:20:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:16.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.741 --rc genhtml_branch_coverage=1 00:17:16.741 --rc genhtml_function_coverage=1 00:17:16.741 --rc genhtml_legend=1 00:17:16.741 --rc geninfo_all_blocks=1 00:17:16.741 --rc geninfo_unexecuted_blocks=1 00:17:16.741 00:17:16.741 ' 00:17:16.741 17:20:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:16.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.741 --rc genhtml_branch_coverage=1 00:17:16.741 --rc genhtml_function_coverage=1 00:17:16.741 --rc genhtml_legend=1 00:17:16.741 --rc geninfo_all_blocks=1 00:17:16.741 --rc geninfo_unexecuted_blocks=1 00:17:16.741 00:17:16.741 ' 00:17:16.741 17:20:13 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.000 17:20:13 -- nvmf/common.sh@7 -- # uname -s 00:17:17.000 17:20:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.000 17:20:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.000 17:20:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.000 17:20:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.000 17:20:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.000 17:20:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.000 17:20:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.000 17:20:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.000 17:20:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.000 17:20:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.000 17:20:13 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:17.000 17:20:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:17.000 17:20:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.000 17:20:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.000 17:20:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.000 17:20:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:17.000 17:20:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.000 17:20:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.000 17:20:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.000 17:20:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.000 17:20:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.000 17:20:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.000 17:20:13 -- paths/export.sh@5 -- # export PATH 00:17:17.000 17:20:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.000 17:20:13 -- nvmf/common.sh@46 -- # : 0 00:17:17.000 17:20:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:17.000 17:20:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:17.000 17:20:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:17.000 17:20:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.000 17:20:13 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.000 17:20:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:17.000 17:20:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:17.000 17:20:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:17.000 17:20:13 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:17.000 17:20:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:17.001 17:20:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.001 17:20:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:17.001 17:20:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:17.001 17:20:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:17.001 17:20:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.001 17:20:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.001 17:20:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.001 17:20:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:17.001 17:20:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:17.001 17:20:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:17.001 17:20:13 -- common/autotest_common.sh@10 -- # set +x 00:17:23.574 17:20:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:23.574 17:20:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:23.574 17:20:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:23.574 17:20:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:23.574 17:20:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:23.574 17:20:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:23.574 17:20:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:23.574 17:20:19 -- nvmf/common.sh@294 -- # net_devs=() 00:17:23.574 17:20:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:23.574 17:20:19 -- nvmf/common.sh@295 -- # e810=() 00:17:23.574 17:20:19 -- nvmf/common.sh@295 -- # local -ga e810 00:17:23.574 17:20:19 -- nvmf/common.sh@296 -- # x722=() 00:17:23.574 17:20:19 -- nvmf/common.sh@296 -- # local -ga x722 00:17:23.574 17:20:19 -- nvmf/common.sh@297 -- # mlx=() 00:17:23.574 17:20:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:23.574 17:20:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.574 17:20:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:23.574 17:20:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:23.574 17:20:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:23.574 17:20:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
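The device-ID arrays built above (e810, x722, mlx) feed the PCI scan that follows, which reports the two Mellanox ports at 0000:d9:00.0 and 0000:d9:00.1. As a minimal editorial sketch, assuming only sysfs and not the harness's own gather_supported_nvmf_pci_devs helper, the same detection can be reproduced by hand like this:

  # Sketch: list PCI functions matching the Mellanox vendor/device IDs seen in
  # the trace below (0x15b3 / 0x1015) and the net devices bound to them.
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    if [[ $vendor == 0x15b3 && $device == 0x1015 ]]; then
      echo "Found ${pci##*/} ($vendor - $device)"
      ls "$pci/net" 2>/dev/null   # e.g. mlx_0_0 / mlx_0_1
    fi
  done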
00:17:23.574 17:20:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:23.574 17:20:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:23.574 17:20:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:23.574 17:20:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:23.574 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:23.574 17:20:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:23.574 17:20:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:23.574 17:20:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:23.574 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:23.574 17:20:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:23.574 17:20:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:23.574 17:20:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:23.574 17:20:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:23.574 17:20:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.574 17:20:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:23.574 17:20:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.574 17:20:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:23.574 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:23.574 17:20:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.574 17:20:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:23.574 17:20:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.574 17:20:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:23.575 17:20:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.575 17:20:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:23.575 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.575 17:20:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:23.575 17:20:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:23.575 17:20:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:23.575 17:20:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:23.575 17:20:19 -- nvmf/common.sh@57 -- # uname 00:17:23.575 17:20:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:23.575 17:20:19 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:23.575 17:20:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:23.575 17:20:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:23.575 
17:20:19 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:23.575 17:20:19 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:23.575 17:20:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:23.575 17:20:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:23.575 17:20:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:23.575 17:20:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:23.575 17:20:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:23.575 17:20:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:23.575 17:20:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:23.575 17:20:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:23.575 17:20:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:23.575 17:20:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:23.575 17:20:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@104 -- # continue 2 00:17:23.575 17:20:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@104 -- # continue 2 00:17:23.575 17:20:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:23.575 17:20:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.575 17:20:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:23.575 17:20:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:23.575 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:23.575 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:23.575 altname enp217s0f0np0 00:17:23.575 altname ens818f0np0 00:17:23.575 inet 192.168.100.8/24 scope global mlx_0_0 00:17:23.575 valid_lft forever preferred_lft forever 00:17:23.575 17:20:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:23.575 17:20:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.575 17:20:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:23.575 17:20:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:23.575 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:23.575 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:23.575 altname enp217s0f1np1 
00:17:23.575 altname ens818f1np1 00:17:23.575 inet 192.168.100.9/24 scope global mlx_0_1 00:17:23.575 valid_lft forever preferred_lft forever 00:17:23.575 17:20:19 -- nvmf/common.sh@410 -- # return 0 00:17:23.575 17:20:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:23.575 17:20:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:23.575 17:20:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:23.575 17:20:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:23.575 17:20:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:23.575 17:20:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:23.575 17:20:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:23.575 17:20:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:23.575 17:20:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:23.575 17:20:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@104 -- # continue 2 00:17:23.575 17:20:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.575 17:20:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:23.575 17:20:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@104 -- # continue 2 00:17:23.575 17:20:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:23.575 17:20:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.575 17:20:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:23.575 17:20:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:23.575 17:20:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:23.575 17:20:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:23.575 192.168.100.9' 00:17:23.575 17:20:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:23.575 192.168.100.9' 00:17:23.575 17:20:19 -- nvmf/common.sh@445 -- # head -n 1 00:17:23.575 17:20:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:23.575 17:20:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:23.575 192.168.100.9' 00:17:23.575 17:20:20 -- nvmf/common.sh@446 -- # tail -n +2 00:17:23.575 17:20:20 -- nvmf/common.sh@446 -- # head -n 1 00:17:23.575 17:20:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:23.575 17:20:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:23.575 17:20:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:17:23.575 17:20:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:23.575 17:20:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:23.575 17:20:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:23.575 17:20:20 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:23.575 17:20:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:23.575 17:20:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:23.575 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:23.575 17:20:20 -- nvmf/common.sh@469 -- # nvmfpid=1332981 00:17:23.575 17:20:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:23.575 17:20:20 -- nvmf/common.sh@470 -- # waitforlisten 1332981 00:17:23.575 17:20:20 -- common/autotest_common.sh@829 -- # '[' -z 1332981 ']' 00:17:23.575 17:20:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.575 17:20:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.575 17:20:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.575 17:20:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.575 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:23.575 [2024-12-14 17:20:20.092964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:23.575 [2024-12-14 17:20:20.093022] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.575 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.575 [2024-12-14 17:20:20.162906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:23.575 [2024-12-14 17:20:20.199924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:23.575 [2024-12-14 17:20:20.200034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.575 [2024-12-14 17:20:20.200043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.575 [2024-12-14 17:20:20.200051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
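With nvmf_tgt up (pid 1332981) and listening on its RPC socket, the test configures the RDMA target entirely over JSON-RPC; rpc_cmd in the trace below is a thin wrapper around the bundled scripts/rpc.py. A hand-run sketch of the same sequence, with the values copied from the trace and only the rpc.py invocation path assumed, would be:

  # Sketch of the target setup performed via rpc_cmd below.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds, so ~1 s per I/O keeps requests in flight
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0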
00:17:23.575 [2024-12-14 17:20:20.200102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.575 [2024-12-14 17:20:20.200104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.512 17:20:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.512 17:20:20 -- common/autotest_common.sh@862 -- # return 0 00:17:24.512 17:20:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:24.512 17:20:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.512 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:24.512 17:20:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.512 17:20:20 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:24.512 17:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.512 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:24.512 [2024-12-14 17:20:20.980054] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1757b50/0x175c000) succeed. 00:17:24.513 [2024-12-14 17:20:20.989183] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1759000/0x179d6a0) succeed. 00:17:24.513 17:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:24.513 17:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.513 17:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:24.513 17:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:24.513 17:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.513 17:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:24.513 [2024-12-14 17:20:21.072041] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:24.513 17:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:24.513 17:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.513 17:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:24.513 NULL1 00:17:24.513 17:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:24.513 17:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.513 17:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:24.513 Delay0 00:17:24.513 17:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:24.513 17:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.513 17:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:24.513 17:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@28 -- # perf_pid=1333103 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:24.513 17:20:21 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma 
adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:24.513 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.513 [2024-12-14 17:20:21.174838] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:27.047 17:20:23 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.047 17:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.047 17:20:23 -- common/autotest_common.sh@10 -- # set +x 00:17:27.761 NVMe io qpair process completion error 00:17:27.761 NVMe io qpair process completion error 00:17:27.761 NVMe io qpair process completion error 00:17:27.761 NVMe io qpair process completion error 00:17:27.761 NVMe io qpair process completion error 00:17:27.761 NVMe io qpair process completion error 00:17:27.761 17:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.761 17:20:24 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:27.761 17:20:24 -- target/delete_subsystem.sh@35 -- # kill -0 1333103 00:17:27.761 17:20:24 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:28.330 17:20:24 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:28.330 17:20:24 -- target/delete_subsystem.sh@35 -- # kill -0 1333103 00:17:28.330 17:20:24 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Write completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Write completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Write completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Write completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Write completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Write completed with error (sct=0, sc=8) 00:17:28.595 starting I/O failed: -6 00:17:28.595 Read 
completed with error (sct=0, sc=8) 00:17:28.595 [repetitive output elided: several hundred further 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' submission failures between 00:17:28.595 and 00:17:28.597] 00:17:28.597 Write completed with
error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 Write completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 Read completed with error (sct=0, sc=8) 00:17:28.597 [2024-12-14 17:20:25.259299] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:17:28.597 17:20:25 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:28.597 17:20:25 -- target/delete_subsystem.sh@35 -- # kill -0 1333103 00:17:28.597 17:20:25 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:28.597 [2024-12-14 17:20:25.273557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:28.597 [2024-12-14 17:20:25.273576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:28.597 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:28.858 Initializing NVMe Controllers 00:17:28.858 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.858 Controller IO queue size 128, less than required. 00:17:28.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.858 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:28.858 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:28.858 Initialization complete. Launching workers. 
00:17:28.858 ======================================================== 00:17:28.858 Latency(us) 00:17:28.858 Device Information : IOPS MiB/s Average min max 00:17:28.858 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.49 0.04 1593766.79 1000099.08 2975931.73 00:17:28.858 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.49 0.04 1595351.48 1001269.38 2976649.62 00:17:28.858 ======================================================== 00:17:28.858 Total : 160.98 0.08 1594559.14 1000099.08 2976649.62 00:17:28.858 00:17:29.117 17:20:25 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:29.117 17:20:25 -- target/delete_subsystem.sh@35 -- # kill -0 1333103 00:17:29.117 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1333103) - No such process 00:17:29.117 17:20:25 -- target/delete_subsystem.sh@45 -- # NOT wait 1333103 00:17:29.117 17:20:25 -- common/autotest_common.sh@650 -- # local es=0 00:17:29.117 17:20:25 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1333103 00:17:29.117 17:20:25 -- common/autotest_common.sh@638 -- # local arg=wait 00:17:29.117 17:20:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.117 17:20:25 -- common/autotest_common.sh@642 -- # type -t wait 00:17:29.117 17:20:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.117 17:20:25 -- common/autotest_common.sh@653 -- # wait 1333103 00:17:29.117 17:20:25 -- common/autotest_common.sh@653 -- # es=1 00:17:29.117 17:20:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.117 17:20:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.117 17:20:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.117 17:20:25 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:29.117 17:20:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.117 17:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:29.117 17:20:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.118 17:20:25 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:29.118 17:20:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.118 17:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:29.118 [2024-12-14 17:20:25.791889] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:29.118 17:20:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.118 17:20:25 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:29.118 17:20:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.118 17:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:29.377 17:20:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.377 17:20:25 -- target/delete_subsystem.sh@54 -- # perf_pid=1333928 00:17:29.377 17:20:25 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:29.377 17:20:25 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:29.377 17:20:25 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:29.377 17:20:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:29.377 
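For reference, the perf run being polled above follows this invocation pattern; a minimal sketch using the flags recorded in the trace (the backgrounding and polling loop are paraphrased from delete_subsystem.sh rather than copied verbatim):

    # 3 s of 70/30 random read/write, 512 B I/O, queue depth 128, cores 2-3 (-c 0xC),
    # against the NVMe-oF/RDMA listener created for nqn.2016-06.io.spdk:cnode1
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # poll until the perf process exits, giving up after ~10 s
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break
        sleep 0.5
    done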
EAL: No free 2048 kB hugepages reported on node 1 00:17:29.377 [2024-12-14 17:20:25.878440] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:29.636 17:20:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:29.636 17:20:26 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:29.636 17:20:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:30.205 17:20:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:30.205 17:20:26 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:30.205 17:20:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:30.774 17:20:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:30.774 17:20:27 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:30.774 17:20:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:31.343 17:20:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:31.343 17:20:27 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:31.343 17:20:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:31.912 17:20:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:31.912 17:20:28 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:31.912 17:20:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:32.171 17:20:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:32.171 17:20:28 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:32.171 17:20:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:32.739 17:20:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:32.739 17:20:29 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:32.739 17:20:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:33.307 17:20:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:33.307 17:20:29 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:33.307 17:20:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:33.876 17:20:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:33.876 17:20:30 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:33.876 17:20:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:34.443 17:20:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:34.443 17:20:30 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:34.443 17:20:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:34.702 17:20:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:34.702 17:20:31 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:34.702 17:20:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:35.271 17:20:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:35.271 17:20:31 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:35.271 17:20:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:35.839 17:20:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:35.839 17:20:32 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:35.839 17:20:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:36.408 17:20:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:36.408 17:20:32 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:36.408 17:20:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:36.408 Initializing NVMe 
Controllers 00:17:36.408 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.408 Controller IO queue size 128, less than required. 00:17:36.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:36.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:36.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:36.408 Initialization complete. Launching workers. 00:17:36.408 ======================================================== 00:17:36.408 Latency(us) 00:17:36.408 Device Information : IOPS MiB/s Average min max 00:17:36.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001257.97 1000053.50 1003840.48 00:17:36.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002336.52 1000064.40 1006106.20 00:17:36.408 ======================================================== 00:17:36.408 Total : 256.00 0.12 1001797.24 1000053.50 1006106.20 00:17:36.408 00:17:36.977 17:20:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:36.977 17:20:33 -- target/delete_subsystem.sh@57 -- # kill -0 1333928 00:17:36.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1333928) - No such process 00:17:36.977 17:20:33 -- target/delete_subsystem.sh@67 -- # wait 1333928 00:17:36.977 17:20:33 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:36.977 17:20:33 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:36.977 17:20:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:36.977 17:20:33 -- nvmf/common.sh@116 -- # sync 00:17:36.977 17:20:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:36.977 17:20:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:36.977 17:20:33 -- nvmf/common.sh@119 -- # set +e 00:17:36.977 17:20:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:36.977 17:20:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:36.977 rmmod nvme_rdma 00:17:36.977 rmmod nvme_fabrics 00:17:36.977 17:20:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:36.977 17:20:33 -- nvmf/common.sh@123 -- # set -e 00:17:36.977 17:20:33 -- nvmf/common.sh@124 -- # return 0 00:17:36.977 17:20:33 -- nvmf/common.sh@477 -- # '[' -n 1332981 ']' 00:17:36.977 17:20:33 -- nvmf/common.sh@478 -- # killprocess 1332981 00:17:36.977 17:20:33 -- common/autotest_common.sh@936 -- # '[' -z 1332981 ']' 00:17:36.977 17:20:33 -- common/autotest_common.sh@940 -- # kill -0 1332981 00:17:36.977 17:20:33 -- common/autotest_common.sh@941 -- # uname 00:17:36.977 17:20:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.977 17:20:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1332981 00:17:36.977 17:20:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:36.977 17:20:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:36.977 17:20:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1332981' 00:17:36.977 killing process with pid 1332981 00:17:36.977 17:20:33 -- common/autotest_common.sh@955 -- # kill 1332981 00:17:36.977 17:20:33 -- common/autotest_common.sh@960 -- # wait 1332981 00:17:37.236 17:20:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:37.236 17:20:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:37.236 00:17:37.236 real 0m20.477s 
00:17:37.236 user 0m50.157s 00:17:37.236 sys 0m6.311s 00:17:37.236 17:20:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:37.236 17:20:33 -- common/autotest_common.sh@10 -- # set +x 00:17:37.236 ************************************ 00:17:37.236 END TEST nvmf_delete_subsystem 00:17:37.236 ************************************ 00:17:37.236 17:20:33 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:37.236 17:20:33 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:37.236 17:20:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:37.236 17:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.236 17:20:33 -- common/autotest_common.sh@10 -- # set +x 00:17:37.236 ************************************ 00:17:37.236 START TEST nvmf_nvme_cli 00:17:37.236 ************************************ 00:17:37.236 17:20:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:37.236 * Looking for test storage... 00:17:37.236 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:37.236 17:20:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:37.236 17:20:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:37.236 17:20:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:37.496 17:20:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:37.496 17:20:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:37.496 17:20:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:37.496 17:20:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:37.496 17:20:33 -- scripts/common.sh@335 -- # IFS=.-: 00:17:37.496 17:20:33 -- scripts/common.sh@335 -- # read -ra ver1 00:17:37.496 17:20:33 -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.496 17:20:33 -- scripts/common.sh@336 -- # read -ra ver2 00:17:37.496 17:20:33 -- scripts/common.sh@337 -- # local 'op=<' 00:17:37.496 17:20:33 -- scripts/common.sh@339 -- # ver1_l=2 00:17:37.496 17:20:33 -- scripts/common.sh@340 -- # ver2_l=1 00:17:37.496 17:20:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:37.496 17:20:33 -- scripts/common.sh@343 -- # case "$op" in 00:17:37.496 17:20:33 -- scripts/common.sh@344 -- # : 1 00:17:37.496 17:20:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:37.496 17:20:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.496 17:20:33 -- scripts/common.sh@364 -- # decimal 1 00:17:37.496 17:20:33 -- scripts/common.sh@352 -- # local d=1 00:17:37.496 17:20:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.496 17:20:33 -- scripts/common.sh@354 -- # echo 1 00:17:37.496 17:20:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:37.496 17:20:33 -- scripts/common.sh@365 -- # decimal 2 00:17:37.496 17:20:33 -- scripts/common.sh@352 -- # local d=2 00:17:37.496 17:20:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.496 17:20:33 -- scripts/common.sh@354 -- # echo 2 00:17:37.496 17:20:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:37.496 17:20:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:37.496 17:20:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:37.496 17:20:33 -- scripts/common.sh@367 -- # return 0 00:17:37.496 17:20:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.496 17:20:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:37.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.496 --rc genhtml_branch_coverage=1 00:17:37.496 --rc genhtml_function_coverage=1 00:17:37.496 --rc genhtml_legend=1 00:17:37.496 --rc geninfo_all_blocks=1 00:17:37.496 --rc geninfo_unexecuted_blocks=1 00:17:37.496 00:17:37.496 ' 00:17:37.496 17:20:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:37.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.496 --rc genhtml_branch_coverage=1 00:17:37.496 --rc genhtml_function_coverage=1 00:17:37.496 --rc genhtml_legend=1 00:17:37.496 --rc geninfo_all_blocks=1 00:17:37.496 --rc geninfo_unexecuted_blocks=1 00:17:37.496 00:17:37.496 ' 00:17:37.496 17:20:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:37.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.496 --rc genhtml_branch_coverage=1 00:17:37.496 --rc genhtml_function_coverage=1 00:17:37.496 --rc genhtml_legend=1 00:17:37.496 --rc geninfo_all_blocks=1 00:17:37.496 --rc geninfo_unexecuted_blocks=1 00:17:37.496 00:17:37.496 ' 00:17:37.496 17:20:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:37.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.496 --rc genhtml_branch_coverage=1 00:17:37.496 --rc genhtml_function_coverage=1 00:17:37.496 --rc genhtml_legend=1 00:17:37.496 --rc geninfo_all_blocks=1 00:17:37.496 --rc geninfo_unexecuted_blocks=1 00:17:37.496 00:17:37.496 ' 00:17:37.496 17:20:33 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.496 17:20:33 -- nvmf/common.sh@7 -- # uname -s 00:17:37.496 17:20:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.496 17:20:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.496 17:20:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.496 17:20:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.496 17:20:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.496 17:20:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.496 17:20:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.496 17:20:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.496 17:20:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.496 17:20:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.496 17:20:33 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:37.496 17:20:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:37.496 17:20:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.496 17:20:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.496 17:20:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.496 17:20:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:37.496 17:20:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.496 17:20:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.496 17:20:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.496 17:20:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.496 17:20:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.496 17:20:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.496 17:20:33 -- paths/export.sh@5 -- # export PATH 00:17:37.496 17:20:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.496 17:20:33 -- nvmf/common.sh@46 -- # : 0 00:17:37.496 17:20:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:37.496 17:20:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:37.496 17:20:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:37.496 17:20:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.496 17:20:33 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.496 17:20:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:37.496 17:20:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:37.496 17:20:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:37.496 17:20:33 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:37.496 17:20:33 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:37.496 17:20:33 -- target/nvme_cli.sh@14 -- # devs=() 00:17:37.496 17:20:33 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:37.496 17:20:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:37.496 17:20:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.496 17:20:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:37.496 17:20:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:37.496 17:20:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:37.496 17:20:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.496 17:20:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.496 17:20:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.496 17:20:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:37.496 17:20:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:37.496 17:20:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:37.496 17:20:34 -- common/autotest_common.sh@10 -- # set +x 00:17:44.070 17:20:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:44.070 17:20:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:44.070 17:20:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:44.070 17:20:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:44.070 17:20:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:44.070 17:20:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:44.070 17:20:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:44.070 17:20:40 -- nvmf/common.sh@294 -- # net_devs=() 00:17:44.070 17:20:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:44.070 17:20:40 -- nvmf/common.sh@295 -- # e810=() 00:17:44.070 17:20:40 -- nvmf/common.sh@295 -- # local -ga e810 00:17:44.070 17:20:40 -- nvmf/common.sh@296 -- # x722=() 00:17:44.070 17:20:40 -- nvmf/common.sh@296 -- # local -ga x722 00:17:44.070 17:20:40 -- nvmf/common.sh@297 -- # mlx=() 00:17:44.071 17:20:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:44.071 17:20:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.071 17:20:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:44.071 17:20:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:44.071 17:20:40 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:44.071 17:20:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:44.071 17:20:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:44.071 17:20:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:44.071 17:20:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:44.071 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:44.071 17:20:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.071 17:20:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:44.071 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:44.071 17:20:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.071 17:20:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:44.071 17:20:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.071 17:20:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:44.071 17:20:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.071 17:20:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:44.071 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.071 17:20:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.071 17:20:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:44.071 17:20:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.071 17:20:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:44.071 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.071 17:20:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:44.071 17:20:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:44.071 17:20:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:44.071 17:20:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:44.071 17:20:40 -- nvmf/common.sh@57 -- # uname 00:17:44.071 17:20:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:44.071 
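The discovery above resolves each Mellanox PCI function to its kernel net device by globbing the sysfs net/ directory; roughly like the following sketch (PCI addresses taken from this run, the loop is a simplified rendering of gather_supported_nvmf_pci_devs, not the exact nvmf/common.sh code):

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # each entry under .../net/ is a netdev bound to that PCI function (mlx_0_0, mlx_0_1 here)
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done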
17:20:40 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:44.071 17:20:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:44.071 17:20:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:44.071 17:20:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:44.071 17:20:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:44.071 17:20:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:44.071 17:20:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:44.071 17:20:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:44.071 17:20:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:44.071 17:20:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:44.071 17:20:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:44.071 17:20:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:44.071 17:20:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:44.071 17:20:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:44.071 17:20:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:44.071 17:20:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@104 -- # continue 2 00:17:44.071 17:20:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@104 -- # continue 2 00:17:44.071 17:20:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:44.071 17:20:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:44.071 17:20:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:44.071 17:20:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:44.071 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:44.071 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:44.071 altname enp217s0f0np0 00:17:44.071 altname ens818f0np0 00:17:44.071 inet 192.168.100.8/24 scope global mlx_0_0 00:17:44.071 valid_lft forever preferred_lft forever 00:17:44.071 17:20:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:44.071 17:20:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:44.071 17:20:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:44.071 17:20:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:17:44.071 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:44.071 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:44.071 altname enp217s0f1np1 00:17:44.071 altname ens818f1np1 00:17:44.071 inet 192.168.100.9/24 scope global mlx_0_1 00:17:44.071 valid_lft forever preferred_lft forever 00:17:44.071 17:20:40 -- nvmf/common.sh@410 -- # return 0 00:17:44.071 17:20:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:44.071 17:20:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:44.071 17:20:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:44.071 17:20:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:44.071 17:20:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:44.071 17:20:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:44.071 17:20:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:44.071 17:20:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:44.071 17:20:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:44.071 17:20:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@104 -- # continue 2 00:17:44.071 17:20:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.071 17:20:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:44.071 17:20:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@104 -- # continue 2 00:17:44.071 17:20:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:44.071 17:20:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:44.071 17:20:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:44.071 17:20:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:44.071 17:20:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:44.072 17:20:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:44.072 17:20:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:44.072 17:20:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:44.072 192.168.100.9' 00:17:44.072 17:20:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:44.072 192.168.100.9' 00:17:44.072 17:20:40 -- nvmf/common.sh@445 -- # head -n 1 00:17:44.072 17:20:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:44.072 17:20:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:44.072 192.168.100.9' 00:17:44.072 17:20:40 -- nvmf/common.sh@446 -- # tail -n +2 00:17:44.072 17:20:40 -- nvmf/common.sh@446 -- # head -n 1 00:17:44.072 17:20:40 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:44.072 17:20:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:44.072 17:20:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:44.072 17:20:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:44.072 17:20:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:44.072 17:20:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:44.331 17:20:40 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:44.331 17:20:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:44.331 17:20:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.331 17:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:44.331 17:20:40 -- nvmf/common.sh@469 -- # nvmfpid=1338720 00:17:44.331 17:20:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.331 17:20:40 -- nvmf/common.sh@470 -- # waitforlisten 1338720 00:17:44.331 17:20:40 -- common/autotest_common.sh@829 -- # '[' -z 1338720 ']' 00:17:44.331 17:20:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.331 17:20:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.331 17:20:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.331 17:20:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.331 17:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:44.331 [2024-12-14 17:20:40.810233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:44.331 [2024-12-14 17:20:40.810292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.331 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.331 [2024-12-14 17:20:40.881388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.331 [2024-12-14 17:20:40.920845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:44.331 [2024-12-14 17:20:40.920964] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.331 [2024-12-14 17:20:40.920975] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.331 [2024-12-14 17:20:40.920984] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
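The target bring-up above amounts to launching nvmf_tgt, waiting for its RPC socket, and then creating the RDMA transport over RPC; a condensed sketch (binary path and RPC arguments as recorded in the trace; the socket-wait loop is illustrative rather than the actual waitforlisten implementation):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the app is accepting RPCs on /var/tmp/spdk.sock
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192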
00:17:44.331 [2024-12-14 17:20:40.921038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.331 [2024-12-14 17:20:40.921126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.331 [2024-12-14 17:20:40.921193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.331 [2024-12-14 17:20:40.921194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.270 17:20:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.270 17:20:41 -- common/autotest_common.sh@862 -- # return 0 00:17:45.270 17:20:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:45.270 17:20:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 17:20:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.270 17:20:41 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 [2024-12-14 17:20:41.702137] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd2c0d0/0xd305a0) succeed. 00:17:45.270 [2024-12-14 17:20:41.711319] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd2d670/0xd71c40) succeed. 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 Malloc0 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 Malloc1 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 [2024-12-14 17:20:41.906201] 
rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:45.270 17:20:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.270 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:45.270 17:20:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.270 17:20:41 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:45.529 00:17:45.529 Discovery Log Number of Records 2, Generation counter 2 00:17:45.529 =====Discovery Log Entry 0====== 00:17:45.529 trtype: rdma 00:17:45.529 adrfam: ipv4 00:17:45.529 subtype: current discovery subsystem 00:17:45.529 treq: not required 00:17:45.529 portid: 0 00:17:45.529 trsvcid: 4420 00:17:45.529 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:45.529 traddr: 192.168.100.8 00:17:45.529 eflags: explicit discovery connections, duplicate discovery information 00:17:45.529 rdma_prtype: not specified 00:17:45.529 rdma_qptype: connected 00:17:45.529 rdma_cms: rdma-cm 00:17:45.529 rdma_pkey: 0x0000 00:17:45.529 =====Discovery Log Entry 1====== 00:17:45.529 trtype: rdma 00:17:45.529 adrfam: ipv4 00:17:45.529 subtype: nvme subsystem 00:17:45.529 treq: not required 00:17:45.529 portid: 0 00:17:45.529 trsvcid: 4420 00:17:45.529 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:45.529 traddr: 192.168.100.8 00:17:45.529 eflags: none 00:17:45.529 rdma_prtype: not specified 00:17:45.529 rdma_qptype: connected 00:17:45.529 rdma_cms: rdma-cm 00:17:45.530 rdma_pkey: 0x0000 00:17:45.530 17:20:42 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:45.530 17:20:42 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:45.530 17:20:42 -- nvmf/common.sh@510 -- # local dev _ 00:17:45.530 17:20:42 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:45.530 17:20:42 -- nvmf/common.sh@509 -- # nvme list 00:17:45.530 17:20:42 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:45.530 17:20:42 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:45.530 17:20:42 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:45.530 17:20:42 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:45.530 17:20:42 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:45.530 17:20:42 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:46.467 17:20:43 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:46.467 17:20:43 -- common/autotest_common.sh@1187 -- # local i=0 00:17:46.467 17:20:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:46.467 17:20:43 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:17:46.467 17:20:43 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:17:46.467 17:20:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:48.373 17:20:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:48.373 17:20:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:48.373 17:20:45 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:17:48.373 17:20:45 
-- common/autotest_common.sh@1196 -- # nvme_devices=2 00:17:48.373 17:20:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.373 17:20:45 -- common/autotest_common.sh@1197 -- # return 0 00:17:48.373 17:20:45 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:48.373 17:20:45 -- nvmf/common.sh@510 -- # local dev _ 00:17:48.373 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.373 17:20:45 -- nvmf/common.sh@509 -- # nvme list 00:17:48.373 17:20:45 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:48.373 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.373 17:20:45 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:48.373 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.373 17:20:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:48.373 17:20:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:48.373 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.373 17:20:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:48.373 17:20:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:48.373 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.373 17:20:45 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:48.373 /dev/nvme0n2 ]] 00:17:48.373 17:20:45 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:48.636 17:20:45 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:48.636 17:20:45 -- nvmf/common.sh@510 -- # local dev _ 00:17:48.636 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.636 17:20:45 -- nvmf/common.sh@509 -- # nvme list 00:17:48.636 17:20:45 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:48.636 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.636 17:20:45 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:48.636 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.636 17:20:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:48.636 17:20:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:48.636 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.636 17:20:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:48.636 17:20:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:48.636 17:20:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:48.636 17:20:45 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:48.636 17:20:45 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.575 17:20:46 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:49.575 17:20:46 -- common/autotest_common.sh@1208 -- # local i=0 00:17:49.575 17:20:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:49.575 17:20:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.575 17:20:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:49.575 17:20:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.575 17:20:46 -- common/autotest_common.sh@1220 -- # return 0 00:17:49.575 17:20:46 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:49.575 17:20:46 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.575 17:20:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.575 17:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:49.575 17:20:46 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.575 17:20:46 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:49.575 17:20:46 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:49.575 17:20:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:49.575 17:20:46 -- nvmf/common.sh@116 -- # sync 00:17:49.575 17:20:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:49.575 17:20:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:49.575 17:20:46 -- nvmf/common.sh@119 -- # set +e 00:17:49.575 17:20:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:49.575 17:20:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:49.575 rmmod nvme_rdma 00:17:49.575 rmmod nvme_fabrics 00:17:49.575 17:20:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:49.575 17:20:46 -- nvmf/common.sh@123 -- # set -e 00:17:49.575 17:20:46 -- nvmf/common.sh@124 -- # return 0 00:17:49.575 17:20:46 -- nvmf/common.sh@477 -- # '[' -n 1338720 ']' 00:17:49.575 17:20:46 -- nvmf/common.sh@478 -- # killprocess 1338720 00:17:49.575 17:20:46 -- common/autotest_common.sh@936 -- # '[' -z 1338720 ']' 00:17:49.575 17:20:46 -- common/autotest_common.sh@940 -- # kill -0 1338720 00:17:49.575 17:20:46 -- common/autotest_common.sh@941 -- # uname 00:17:49.575 17:20:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.575 17:20:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1338720 00:17:49.575 17:20:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.575 17:20:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.575 17:20:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1338720' 00:17:49.575 killing process with pid 1338720 00:17:49.575 17:20:46 -- common/autotest_common.sh@955 -- # kill 1338720 00:17:49.575 17:20:46 -- common/autotest_common.sh@960 -- # wait 1338720 00:17:50.145 17:20:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:50.145 17:20:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:50.145 00:17:50.145 real 0m12.761s 00:17:50.145 user 0m24.267s 00:17:50.145 sys 0m5.855s 00:17:50.145 17:20:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:50.145 17:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:50.145 ************************************ 00:17:50.145 END TEST nvmf_nvme_cli 00:17:50.145 ************************************ 00:17:50.145 17:20:46 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:50.145 17:20:46 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:50.145 17:20:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:50.145 17:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:50.145 17:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:50.145 ************************************ 00:17:50.145 START TEST nvmf_host_management 00:17:50.145 ************************************ 00:17:50.145 17:20:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:50.145 * Looking for test storage... 
00:17:50.145 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:50.145 17:20:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:50.145 17:20:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:50.145 17:20:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:50.145 17:20:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:50.145 17:20:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:50.145 17:20:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:50.145 17:20:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:50.145 17:20:46 -- scripts/common.sh@335 -- # IFS=.-: 00:17:50.145 17:20:46 -- scripts/common.sh@335 -- # read -ra ver1 00:17:50.145 17:20:46 -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.145 17:20:46 -- scripts/common.sh@336 -- # read -ra ver2 00:17:50.145 17:20:46 -- scripts/common.sh@337 -- # local 'op=<' 00:17:50.145 17:20:46 -- scripts/common.sh@339 -- # ver1_l=2 00:17:50.145 17:20:46 -- scripts/common.sh@340 -- # ver2_l=1 00:17:50.145 17:20:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:50.145 17:20:46 -- scripts/common.sh@343 -- # case "$op" in 00:17:50.145 17:20:46 -- scripts/common.sh@344 -- # : 1 00:17:50.145 17:20:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:50.145 17:20:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.145 17:20:46 -- scripts/common.sh@364 -- # decimal 1 00:17:50.145 17:20:46 -- scripts/common.sh@352 -- # local d=1 00:17:50.145 17:20:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.145 17:20:46 -- scripts/common.sh@354 -- # echo 1 00:17:50.145 17:20:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:50.145 17:20:46 -- scripts/common.sh@365 -- # decimal 2 00:17:50.146 17:20:46 -- scripts/common.sh@352 -- # local d=2 00:17:50.146 17:20:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.146 17:20:46 -- scripts/common.sh@354 -- # echo 2 00:17:50.146 17:20:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:50.146 17:20:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:50.146 17:20:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:50.146 17:20:46 -- scripts/common.sh@367 -- # return 0 00:17:50.146 17:20:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.146 17:20:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.146 --rc genhtml_branch_coverage=1 00:17:50.146 --rc genhtml_function_coverage=1 00:17:50.146 --rc genhtml_legend=1 00:17:50.146 --rc geninfo_all_blocks=1 00:17:50.146 --rc geninfo_unexecuted_blocks=1 00:17:50.146 00:17:50.146 ' 00:17:50.146 17:20:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.146 --rc genhtml_branch_coverage=1 00:17:50.146 --rc genhtml_function_coverage=1 00:17:50.146 --rc genhtml_legend=1 00:17:50.146 --rc geninfo_all_blocks=1 00:17:50.146 --rc geninfo_unexecuted_blocks=1 00:17:50.146 00:17:50.146 ' 00:17:50.146 17:20:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.146 --rc genhtml_branch_coverage=1 00:17:50.146 --rc genhtml_function_coverage=1 00:17:50.146 --rc genhtml_legend=1 00:17:50.146 --rc geninfo_all_blocks=1 00:17:50.146 --rc geninfo_unexecuted_blocks=1 00:17:50.146 00:17:50.146 ' 
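The lcov version gate traced above (lt 1.15 2) splits both version strings on '.', '-' and ':' and compares the fields numerically from left to right; a simplified stand-alone rendering, assuming numeric fields (not the exact scripts/common.sh implementation):

    lt() {
        local -a ver1 ver2 i
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # first differing field decides
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"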
00:17:50.146 17:20:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.146 --rc genhtml_branch_coverage=1 00:17:50.146 --rc genhtml_function_coverage=1 00:17:50.146 --rc genhtml_legend=1 00:17:50.146 --rc geninfo_all_blocks=1 00:17:50.146 --rc geninfo_unexecuted_blocks=1 00:17:50.146 00:17:50.146 ' 00:17:50.146 17:20:46 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.146 17:20:46 -- nvmf/common.sh@7 -- # uname -s 00:17:50.146 17:20:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.146 17:20:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.146 17:20:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.146 17:20:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.146 17:20:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.146 17:20:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.146 17:20:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.146 17:20:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.146 17:20:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.146 17:20:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.146 17:20:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:50.146 17:20:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:50.146 17:20:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.146 17:20:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.146 17:20:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.146 17:20:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:50.146 17:20:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.146 17:20:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.146 17:20:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.146 17:20:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.146 17:20:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.146 17:20:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.146 17:20:46 -- paths/export.sh@5 -- # export PATH 00:17:50.146 17:20:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.146 17:20:46 -- nvmf/common.sh@46 -- # : 0 00:17:50.146 17:20:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.146 17:20:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.146 17:20:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.146 17:20:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.146 17:20:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.146 17:20:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:50.146 17:20:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.146 17:20:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.146 17:20:46 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.146 17:20:46 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.146 17:20:46 -- target/host_management.sh@104 -- # nvmftestinit 00:17:50.146 17:20:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:50.146 17:20:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.146 17:20:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.146 17:20:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.146 17:20:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.146 17:20:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.146 17:20:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.146 17:20:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.146 17:20:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:50.146 17:20:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:50.146 17:20:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:50.146 17:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:56.723 17:20:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:56.723 17:20:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:56.723 17:20:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:56.723 17:20:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:56.723 17:20:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:56.723 17:20:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:56.723 17:20:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:56.723 17:20:53 -- nvmf/common.sh@294 -- # net_devs=() 00:17:56.723 17:20:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:56.723 
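The nvmf/common.sh setup traced above derives the host NQN with `nvme gen-hostnqn` and reuses the embedded UUID as the host ID, later passing both to `nvme connect`. A hedged sketch of that pattern (variable handling is my own simplification; the connect line is illustrative only):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep just the uuid after the last ':'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # a later RDMA connect would pass these along, roughly:
    # nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"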
17:20:53 -- nvmf/common.sh@295 -- # e810=() 00:17:56.723 17:20:53 -- nvmf/common.sh@295 -- # local -ga e810 00:17:56.723 17:20:53 -- nvmf/common.sh@296 -- # x722=() 00:17:56.723 17:20:53 -- nvmf/common.sh@296 -- # local -ga x722 00:17:56.723 17:20:53 -- nvmf/common.sh@297 -- # mlx=() 00:17:56.723 17:20:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:56.723 17:20:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.723 17:20:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:56.723 17:20:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:56.723 17:20:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:56.723 17:20:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:56.723 17:20:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:56.723 17:20:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:56.723 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:56.723 17:20:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:56.723 17:20:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:56.723 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:56.723 17:20:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:56.723 17:20:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:56.723 17:20:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.723 17:20:53 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.723 17:20:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.723 17:20:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:56.723 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:56.723 17:20:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.723 17:20:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.723 17:20:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.723 17:20:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.723 17:20:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:56.723 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:56.723 17:20:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.723 17:20:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:56.723 17:20:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:56.723 17:20:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:56.723 17:20:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:56.723 17:20:53 -- nvmf/common.sh@57 -- # uname 00:17:56.723 17:20:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:56.723 17:20:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:56.723 17:20:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:56.723 17:20:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:56.723 17:20:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:56.723 17:20:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:56.723 17:20:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:56.723 17:20:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:56.723 17:20:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:56.723 17:20:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:56.723 17:20:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:56.723 17:20:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:56.723 17:20:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:56.723 17:20:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:56.723 17:20:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:56.723 17:20:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:56.723 17:20:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:56.723 17:20:53 -- nvmf/common.sh@104 -- # continue 2 00:17:56.723 17:20:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:56.723 17:20:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.723 17:20:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:56.724 17:20:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:56.724 17:20:53 -- nvmf/common.sh@104 -- # continue 2 00:17:56.724 17:20:53 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:56.724 17:20:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:56.724 17:20:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:56.724 17:20:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:56.724 17:20:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.724 17:20:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.724 17:20:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:56.724 17:20:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:56.724 17:20:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:56.724 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:56.724 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:56.724 altname enp217s0f0np0 00:17:56.724 altname ens818f0np0 00:17:56.724 inet 192.168.100.8/24 scope global mlx_0_0 00:17:56.724 valid_lft forever preferred_lft forever 00:17:56.724 17:20:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:56.724 17:20:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:56.724 17:20:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:56.724 17:20:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:56.724 17:20:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.724 17:20:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.724 17:20:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:56.724 17:20:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:56.724 17:20:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:56.724 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:56.724 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:56.724 altname enp217s0f1np1 00:17:56.724 altname ens818f1np1 00:17:56.724 inet 192.168.100.9/24 scope global mlx_0_1 00:17:56.724 valid_lft forever preferred_lft forever 00:17:56.724 17:20:53 -- nvmf/common.sh@410 -- # return 0 00:17:56.724 17:20:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:56.724 17:20:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:56.724 17:20:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:56.724 17:20:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:56.984 17:20:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:56.984 17:20:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:56.984 17:20:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:56.984 17:20:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:56.984 17:20:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:56.984 17:20:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:56.984 17:20:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.984 17:20:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.984 17:20:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:56.984 17:20:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:56.984 17:20:53 -- nvmf/common.sh@104 -- # continue 2 00:17:56.984 17:20:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:56.984 17:20:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.984 17:20:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:56.985 17:20:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:56.985 17:20:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:56.985 17:20:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 
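The prepare_net_devs / rdma_device_init stretch above first matches the Mellanox functions by PCI vendor/device ID and then modprobes the IB/RDMA kernel modules before touching the interfaces. A condensed standalone sketch of both steps (device ID trimmed to the ConnectX-4 Lx 0x15b3:0x1015 seen in this run, module list copied from the trace):

    # PCI side: list 15b3:1015 functions in the same "Found ..." form as the helper prints
    lspci -Dnmm -d 15b3:1015 | awk '{printf "Found %s (0x15b3 - 0x1015)\n", $1}'

    # kernel side: load the IB/RDMA stack
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done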
00:17:56.985 17:20:53 -- nvmf/common.sh@104 -- # continue 2 00:17:56.985 17:20:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:56.985 17:20:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:56.985 17:20:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:56.985 17:20:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:56.985 17:20:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.985 17:20:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.985 17:20:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:56.985 17:20:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:56.985 17:20:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:56.985 17:20:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:56.985 17:20:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:56.985 17:20:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:56.985 17:20:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:56.985 192.168.100.9' 00:17:56.985 17:20:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:56.985 192.168.100.9' 00:17:56.985 17:20:53 -- nvmf/common.sh@445 -- # head -n 1 00:17:56.985 17:20:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:56.985 17:20:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:56.985 192.168.100.9' 00:17:56.985 17:20:53 -- nvmf/common.sh@446 -- # tail -n +2 00:17:56.985 17:20:53 -- nvmf/common.sh@446 -- # head -n 1 00:17:56.985 17:20:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:56.985 17:20:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:56.985 17:20:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:56.985 17:20:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:56.985 17:20:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:56.985 17:20:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:56.985 17:20:53 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:56.985 17:20:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:56.985 17:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:56.985 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.985 ************************************ 00:17:56.985 START TEST nvmf_host_management 00:17:56.985 ************************************ 00:17:56.985 17:20:53 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:56.985 17:20:53 -- target/host_management.sh@69 -- # starttarget 00:17:56.985 17:20:53 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:56.985 17:20:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:56.985 17:20:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.985 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.985 17:20:53 -- nvmf/common.sh@469 -- # nvmfpid=1343031 00:17:56.985 17:20:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:56.985 17:20:53 -- nvmf/common.sh@470 -- # waitforlisten 1343031 00:17:56.985 17:20:53 -- common/autotest_common.sh@829 -- # '[' -z 1343031 ']' 00:17:56.985 17:20:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.985 17:20:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.985 17:20:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
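allocate_nic_ips and get_available_rdma_ips above boil down to reading one IPv4 address per mlx_0_* interface and keeping the first two entries as target addresses. A minimal sketch of that flow (interface names as seen in the trace):

    get_ipv4() {
        # "ip -o -4" prints one line per address, e.g. "6: mlx_0_0  inet 192.168.100.8/24 ..."
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n 1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ipv4 "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9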
00:17:56.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.985 17:20:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.985 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.985 [2024-12-14 17:20:53.577575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:56.985 [2024-12-14 17:20:53.577631] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.985 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.985 [2024-12-14 17:20:53.647784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.245 [2024-12-14 17:20:53.686957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.245 [2024-12-14 17:20:53.687089] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.245 [2024-12-14 17:20:53.687100] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.245 [2024-12-14 17:20:53.687109] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.245 [2024-12-14 17:20:53.687228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.245 [2024-12-14 17:20:53.687299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.245 [2024-12-14 17:20:53.687690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.245 [2024-12-14 17:20:53.687690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:57.815 17:20:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.815 17:20:54 -- common/autotest_common.sh@862 -- # return 0 00:17:57.815 17:20:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:57.815 17:20:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.815 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:17:57.815 17:20:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.815 17:20:54 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:57.815 17:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.815 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:17:57.815 [2024-12-14 17:20:54.478503] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7413c0/0x745890) succeed. 00:17:57.815 [2024-12-14 17:20:54.487732] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x742960/0x786f30) succeed. 
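nvmfappstart above launches nvmf_tgt with the 0x1E core mask, waits for its RPC socket, and then creates the RDMA transport over RPC. The same bring-up as a condensed sketch, with the full waitforlisten helper replaced by a simple poll for the default socket path (an assumption on my part):

    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # wait for the default RPC socket instead of the full waitforlisten helper
    while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done

    "$rootdir/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192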
00:17:58.074 17:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.074 17:20:54 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:58.074 17:20:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.074 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:17:58.074 17:20:54 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:58.074 17:20:54 -- target/host_management.sh@23 -- # cat 00:17:58.074 17:20:54 -- target/host_management.sh@30 -- # rpc_cmd 00:17:58.074 17:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.074 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:17:58.074 Malloc0 00:17:58.074 [2024-12-14 17:20:54.665478] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:58.074 17:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.074 17:20:54 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:58.074 17:20:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.074 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:17:58.074 17:20:54 -- target/host_management.sh@73 -- # perfpid=1343282 00:17:58.074 17:20:54 -- target/host_management.sh@74 -- # waitforlisten 1343282 /var/tmp/bdevperf.sock 00:17:58.074 17:20:54 -- common/autotest_common.sh@829 -- # '[' -z 1343282 ']' 00:17:58.074 17:20:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.074 17:20:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.074 17:20:54 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:58.074 17:20:54 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:58.074 17:20:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.074 17:20:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.074 17:20:54 -- nvmf/common.sh@520 -- # config=() 00:17:58.074 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:17:58.074 17:20:54 -- nvmf/common.sh@520 -- # local subsystem config 00:17:58.074 17:20:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:58.074 17:20:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:58.074 { 00:17:58.074 "params": { 00:17:58.074 "name": "Nvme$subsystem", 00:17:58.074 "trtype": "$TEST_TRANSPORT", 00:17:58.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.074 "adrfam": "ipv4", 00:17:58.074 "trsvcid": "$NVMF_PORT", 00:17:58.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.074 "hdgst": ${hdgst:-false}, 00:17:58.074 "ddgst": ${ddgst:-false} 00:17:58.074 }, 00:17:58.074 "method": "bdev_nvme_attach_controller" 00:17:58.074 } 00:17:58.074 EOF 00:17:58.074 )") 00:17:58.074 17:20:54 -- nvmf/common.sh@542 -- # cat 00:17:58.074 17:20:54 -- nvmf/common.sh@544 -- # jq . 
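The cat/rpc_cmd pair above replays a short batch of RPCs that back the test: a 64 MiB / 512 B Malloc bdev exported as cnode0 on the RDMA listener at 192.168.100.8:4420. The rpcs.txt contents are not shown in the trace, so the batch below is an illustrative reconstruction (rpc.py reads one command per line from stdin):

    printf '%s\n' \
        'bdev_malloc_create 64 512 -b Malloc0' \
        'nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME' \
        'nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0' \
        'nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420' \
        | "$rootdir/scripts/rpc.py"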
00:17:58.074 17:20:54 -- nvmf/common.sh@545 -- # IFS=, 00:17:58.074 17:20:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:58.074 "params": { 00:17:58.074 "name": "Nvme0", 00:17:58.074 "trtype": "rdma", 00:17:58.074 "traddr": "192.168.100.8", 00:17:58.074 "adrfam": "ipv4", 00:17:58.074 "trsvcid": "4420", 00:17:58.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:58.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:58.075 "hdgst": false, 00:17:58.075 "ddgst": false 00:17:58.075 }, 00:17:58.075 "method": "bdev_nvme_attach_controller" 00:17:58.075 }' 00:17:58.334 [2024-12-14 17:20:54.767886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:58.334 [2024-12-14 17:20:54.767939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343282 ] 00:17:58.334 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.334 [2024-12-14 17:20:54.839509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.334 [2024-12-14 17:20:54.875873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.593 Running I/O for 10 seconds... 00:17:59.161 17:20:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.161 17:20:55 -- common/autotest_common.sh@862 -- # return 0 00:17:59.161 17:20:55 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:59.161 17:20:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.161 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:17:59.161 17:20:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.161 17:20:55 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.161 17:20:55 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:59.161 17:20:55 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:59.161 17:20:55 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:59.161 17:20:55 -- target/host_management.sh@52 -- # local ret=1 00:17:59.161 17:20:55 -- target/host_management.sh@53 -- # local i 00:17:59.161 17:20:55 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:59.161 17:20:55 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:59.161 17:20:55 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:59.161 17:20:55 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:59.161 17:20:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.161 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:17:59.161 17:20:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.161 17:20:55 -- target/host_management.sh@55 -- # read_io_count=3238 00:17:59.161 17:20:55 -- target/host_management.sh@58 -- # '[' 3238 -ge 100 ']' 00:17:59.161 17:20:55 -- target/host_management.sh@59 -- # ret=0 00:17:59.161 17:20:55 -- target/host_management.sh@60 -- # break 00:17:59.161 17:20:55 -- target/host_management.sh@64 -- # return 0 00:17:59.161 17:20:55 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:59.161 17:20:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.161 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:17:59.161 17:20:55 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.161 17:20:55 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:59.161 17:20:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.161 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:17:59.161 17:20:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.162 17:20:55 -- target/host_management.sh@87 -- # sleep 1 00:18:00.101 [2024-12-14 17:20:56.670502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:18:00.101 [2024-12-14 17:20:56.670538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:18:00.101 [2024-12-14 17:20:56.670567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:18:00.101 [2024-12-14 17:20:56.670588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182600 00:18:00.101 [2024-12-14 17:20:56.670608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:18:00.101 [2024-12-14 17:20:56.670627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:18:00.101 [2024-12-14 17:20:56.670647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182000 00:18:00.101 [2024-12-14 17:20:56.670666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:18:00.101 [2024-12-14 17:20:56.670686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:18:00.101 [2024-12-14 17:20:56.670710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:18:00.101 [2024-12-14 17:20:56.670730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182700 00:18:00.101 [2024-12-14 17:20:56.670750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:18:00.101 [2024-12-14 17:20:56.670769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.101 [2024-12-14 17:20:56.670780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:18:00.102 [2024-12-14 17:20:56.670788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.670808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:18:00.102 [2024-12-14 17:20:56.670828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.670847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182700 00:18:00.102 [2024-12-14 17:20:56.670867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.670887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.670907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.670928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:18:00.102 [2024-12-14 17:20:56.670948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.670968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:18:00.102 [2024-12-14 17:20:56.670988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.670998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.671007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:18:00.102 [2024-12-14 17:20:56.671026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.671046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:18:00.102 [2024-12-14 17:20:56.671065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:18:00.102 [2024-12-14 17:20:56.671084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.671103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.671123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.671146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:18:00.102 [2024-12-14 17:20:56.671166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.671186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.671205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:18:00.102 [2024-12-14 17:20:56.671225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:18:00.102 [2024-12-14 17:20:56.671244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.671264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182000 00:18:00.102 [2024-12-14 17:20:56.671283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:18:00.102 [2024-12-14 17:20:56.671303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.671322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:18:00.102 [2024-12-14 17:20:56.671342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:18:00.102 [2024-12-14 17:20:56.671361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.671381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:18:00.102 [2024-12-14 17:20:56.671401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:18:00.102 [2024-12-14 17:20:56.671421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:18:00.102 [2024-12-14 17:20:56.671441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.102 [2024-12-14 17:20:56.671451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:18:00.103 [2024-12-14 17:20:56.671460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:18:00.103 [2024-12-14 17:20:56.671480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:18:00.103 [2024-12-14 17:20:56.671504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:18:00.103 [2024-12-14 17:20:56.671523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:18:00.103 [2024-12-14 17:20:56.671542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:18:00.103 [2024-12-14 17:20:56.671562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:18:00.103 [2024-12-14 17:20:56.671581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:18:00.103 [2024-12-14 17:20:56.671602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4c000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6d000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.671789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182300 00:18:00.103 [2024-12-14 17:20:56.671798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64587 cdw0:4402a000 sqhd:8cd4 p:1 m:0 dnr:0 00:18:00.103 [2024-12-14 17:20:56.673795] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:18:00.103 [2024-12-14 17:20:56.674674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:00.103 task offset: 48640 on job bdev=Nvme0n1 fails 00:18:00.103 00:18:00.103 Latency(us) 00:18:00.103 [2024-12-14T16:20:56.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.103 [2024-12-14T16:20:56.787Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:00.103 [2024-12-14T16:20:56.787Z] Job: Nvme0n1 ended in about 1.62 seconds with error 00:18:00.103 Verification LBA range: start 0x0 length 0x400 00:18:00.103 Nvme0n1 : 1.62 2121.37 132.59 39.41 0.00 29437.96 3171.94 1020054.73 00:18:00.103 [2024-12-14T16:20:56.787Z] =================================================================================================================== 00:18:00.103 [2024-12-14T16:20:56.787Z] Total : 2121.37 132.59 39.41 0.00 29437.96 3171.94 1020054.73 00:18:00.103 [2024-12-14 17:20:56.676267] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.103 17:20:56 -- target/host_management.sh@91 -- # kill -9 1343282 00:18:00.103 17:20:56 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:00.103 17:20:56 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:00.103 17:20:56 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:00.103 17:20:56 -- nvmf/common.sh@520 -- # config=() 00:18:00.103 17:20:56 -- nvmf/common.sh@520 -- # local subsystem config 00:18:00.103 17:20:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:00.103 17:20:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:00.103 { 00:18:00.103 "params": { 00:18:00.103 "name": "Nvme$subsystem", 00:18:00.103 "trtype": "$TEST_TRANSPORT", 00:18:00.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.103 "adrfam": "ipv4", 00:18:00.103 "trsvcid": "$NVMF_PORT", 00:18:00.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.103 "hdgst": ${hdgst:-false}, 00:18:00.103 "ddgst": ${ddgst:-false} 00:18:00.103 }, 00:18:00.103 "method": "bdev_nvme_attach_controller" 00:18:00.103 } 00:18:00.103 EOF 00:18:00.103 )") 00:18:00.103 17:20:56 -- nvmf/common.sh@542 -- # cat 
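Both bdevperf passes above follow the same pattern: feed it a one-controller JSON config generated on the fly, point it at its own RPC socket, and (for the first pass) poll bdev_get_iostat until reads are flowing before injecting the host-removal fault. A condensed end-to-end sketch under those assumptions, with the config written to a temp file instead of a /dev/fd process substitution, and with values copied from the trace (the real gen_nvmf_target_json also emits bdev_nvme option entries):

    jq -n '{subsystems: [{subsystem: "bdev", config: [{
             method: "bdev_nvme_attach_controller",
             params: {name: "Nvme0", trtype: "rdma", traddr: "192.168.100.8", adrfam: "ipv4",
                      trsvcid: "4420", subnqn: "nqn.2016-06.io.spdk:cnode0",
                      hostnqn: "nqn.2016-06.io.spdk:host0", hdgst: false, ddgst: false}}]}]}' \
        > /tmp/nvme0_bdevperf.json

    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json /tmp/nvme0_bdevperf.json -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!

    # wait for bdevperf's RPC socket, then poll iostat the way waitforio did above
    while [[ ! -S /var/tmp/bdevperf.sock ]]; do sleep 0.1; done
    "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock framework_wait_init
    for ((i = 0; i < 10; i++)); do
        reads=$("$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( ${reads:-0} >= 100 )) && break
        sleep 1
    done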
00:18:00.103 17:20:56 -- nvmf/common.sh@544 -- # jq . 00:18:00.103 17:20:56 -- nvmf/common.sh@545 -- # IFS=, 00:18:00.103 17:20:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:00.103 "params": { 00:18:00.103 "name": "Nvme0", 00:18:00.103 "trtype": "rdma", 00:18:00.103 "traddr": "192.168.100.8", 00:18:00.103 "adrfam": "ipv4", 00:18:00.103 "trsvcid": "4420", 00:18:00.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:00.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:00.103 "hdgst": false, 00:18:00.103 "ddgst": false 00:18:00.103 }, 00:18:00.103 "method": "bdev_nvme_attach_controller" 00:18:00.103 }' 00:18:00.103 [2024-12-14 17:20:56.730657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:00.103 [2024-12-14 17:20:56.730709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343629 ] 00:18:00.103 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.363 [2024-12-14 17:20:56.801482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.363 [2024-12-14 17:20:56.838317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.363 Running I/O for 1 seconds... 00:18:01.744 00:18:01.744 Latency(us) 00:18:01.744 [2024-12-14T16:20:58.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.744 [2024-12-14T16:20:58.428Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:01.744 Verification LBA range: start 0x0 length 0x400 00:18:01.744 Nvme0n1 : 1.01 5601.77 350.11 0.00 0.00 11251.83 560.33 24536.68 00:18:01.744 [2024-12-14T16:20:58.428Z] =================================================================================================================== 00:18:01.744 [2024-12-14T16:20:58.428Z] Total : 5601.77 350.11 0.00 0.00 11251.83 560.33 24536.68 00:18:01.744 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1343282 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:18:01.744 17:20:58 -- target/host_management.sh@101 -- # stoptarget 00:18:01.744 17:20:58 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:01.744 17:20:58 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:01.744 17:20:58 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:01.744 17:20:58 -- target/host_management.sh@40 -- # nvmftestfini 00:18:01.744 17:20:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:01.744 17:20:58 -- nvmf/common.sh@116 -- # sync 00:18:01.744 17:20:58 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:01.744 17:20:58 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:01.744 17:20:58 -- nvmf/common.sh@119 -- # set +e 00:18:01.744 17:20:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:01.744 17:20:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:01.744 rmmod nvme_rdma 00:18:01.744 rmmod nvme_fabrics 00:18:01.744 17:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:01.744 17:20:58 -- nvmf/common.sh@123 -- # set -e 00:18:01.744 17:20:58 -- nvmf/common.sh@124 -- # return 0 00:18:01.744 17:20:58 -- nvmf/common.sh@477 -- # '[' -n 1343031 ']' 00:18:01.744 17:20:58 -- 
nvmf/common.sh@478 -- # killprocess 1343031 00:18:01.744 17:20:58 -- common/autotest_common.sh@936 -- # '[' -z 1343031 ']' 00:18:01.744 17:20:58 -- common/autotest_common.sh@940 -- # kill -0 1343031 00:18:01.744 17:20:58 -- common/autotest_common.sh@941 -- # uname 00:18:01.744 17:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.744 17:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1343031 00:18:01.744 17:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.744 17:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.744 17:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1343031' 00:18:01.744 killing process with pid 1343031 00:18:01.744 17:20:58 -- common/autotest_common.sh@955 -- # kill 1343031 00:18:01.744 17:20:58 -- common/autotest_common.sh@960 -- # wait 1343031 00:18:02.004 [2024-12-14 17:20:58.576967] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:02.004 17:20:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:02.004 17:20:58 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:02.004 00:18:02.004 real 0m5.076s 00:18:02.004 user 0m22.838s 00:18:02.004 sys 0m1.028s 00:18:02.004 17:20:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:02.004 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:18:02.004 ************************************ 00:18:02.004 END TEST nvmf_host_management 00:18:02.004 ************************************ 00:18:02.004 17:20:58 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:02.004 00:18:02.004 real 0m12.053s 00:18:02.004 user 0m24.800s 00:18:02.004 sys 0m6.272s 00:18:02.004 17:20:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:02.004 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:18:02.004 ************************************ 00:18:02.004 END TEST nvmf_host_management 00:18:02.004 ************************************ 00:18:02.265 17:20:58 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:18:02.265 17:20:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:02.265 17:20:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.265 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:18:02.265 ************************************ 00:18:02.265 START TEST nvmf_lvol 00:18:02.265 ************************************ 00:18:02.265 17:20:58 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:18:02.265 * Looking for test storage... 
00:18:02.265 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:02.265 17:20:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:02.265 17:20:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:02.265 17:20:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:02.265 17:20:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:02.265 17:20:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:02.265 17:20:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:02.265 17:20:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:02.265 17:20:58 -- scripts/common.sh@335 -- # IFS=.-: 00:18:02.265 17:20:58 -- scripts/common.sh@335 -- # read -ra ver1 00:18:02.265 17:20:58 -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.265 17:20:58 -- scripts/common.sh@336 -- # read -ra ver2 00:18:02.265 17:20:58 -- scripts/common.sh@337 -- # local 'op=<' 00:18:02.265 17:20:58 -- scripts/common.sh@339 -- # ver1_l=2 00:18:02.265 17:20:58 -- scripts/common.sh@340 -- # ver2_l=1 00:18:02.265 17:20:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:02.265 17:20:58 -- scripts/common.sh@343 -- # case "$op" in 00:18:02.265 17:20:58 -- scripts/common.sh@344 -- # : 1 00:18:02.265 17:20:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:02.265 17:20:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.265 17:20:58 -- scripts/common.sh@364 -- # decimal 1 00:18:02.265 17:20:58 -- scripts/common.sh@352 -- # local d=1 00:18:02.265 17:20:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.265 17:20:58 -- scripts/common.sh@354 -- # echo 1 00:18:02.265 17:20:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:02.265 17:20:58 -- scripts/common.sh@365 -- # decimal 2 00:18:02.265 17:20:58 -- scripts/common.sh@352 -- # local d=2 00:18:02.265 17:20:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.265 17:20:58 -- scripts/common.sh@354 -- # echo 2 00:18:02.265 17:20:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:02.265 17:20:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.265 17:20:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:02.265 17:20:58 -- scripts/common.sh@367 -- # return 0 00:18:02.265 17:20:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.265 17:20:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.265 --rc genhtml_branch_coverage=1 00:18:02.265 --rc genhtml_function_coverage=1 00:18:02.265 --rc genhtml_legend=1 00:18:02.265 --rc geninfo_all_blocks=1 00:18:02.265 --rc geninfo_unexecuted_blocks=1 00:18:02.265 00:18:02.265 ' 00:18:02.265 17:20:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.265 --rc genhtml_branch_coverage=1 00:18:02.265 --rc genhtml_function_coverage=1 00:18:02.265 --rc genhtml_legend=1 00:18:02.265 --rc geninfo_all_blocks=1 00:18:02.265 --rc geninfo_unexecuted_blocks=1 00:18:02.265 00:18:02.265 ' 00:18:02.265 17:20:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.265 --rc genhtml_branch_coverage=1 00:18:02.265 --rc genhtml_function_coverage=1 00:18:02.265 --rc genhtml_legend=1 00:18:02.265 --rc geninfo_all_blocks=1 00:18:02.265 --rc geninfo_unexecuted_blocks=1 00:18:02.265 00:18:02.265 ' 
00:18:02.265 17:20:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.265 --rc genhtml_branch_coverage=1 00:18:02.265 --rc genhtml_function_coverage=1 00:18:02.265 --rc genhtml_legend=1 00:18:02.265 --rc geninfo_all_blocks=1 00:18:02.265 --rc geninfo_unexecuted_blocks=1 00:18:02.265 00:18:02.265 ' 00:18:02.265 17:20:58 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.265 17:20:58 -- nvmf/common.sh@7 -- # uname -s 00:18:02.265 17:20:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.265 17:20:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.265 17:20:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.265 17:20:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.265 17:20:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.265 17:20:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.265 17:20:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.265 17:20:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.265 17:20:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.265 17:20:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.265 17:20:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:02.265 17:20:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:02.265 17:20:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.265 17:20:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.265 17:20:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.265 17:20:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:02.265 17:20:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.265 17:20:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.265 17:20:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.265 17:20:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.265 17:20:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.265 17:20:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.265 17:20:58 -- paths/export.sh@5 -- # export PATH 00:18:02.266 17:20:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.266 17:20:58 -- nvmf/common.sh@46 -- # : 0 00:18:02.266 17:20:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.266 17:20:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.266 17:20:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.266 17:20:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.266 17:20:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.266 17:20:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:02.266 17:20:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.266 17:20:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.266 17:20:58 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.266 17:20:58 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.266 17:20:58 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:02.266 17:20:58 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:02.266 17:20:58 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:02.266 17:20:58 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:02.266 17:20:58 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:02.266 17:20:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.266 17:20:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:02.266 17:20:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.266 17:20:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.266 17:20:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.266 17:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.266 17:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.266 17:20:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:02.266 17:20:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:02.266 17:20:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:02.266 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:18:08.842 17:21:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:08.842 17:21:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:08.842 17:21:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:08.842 17:21:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:08.842 17:21:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:08.842 17:21:05 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:18:08.842 17:21:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:08.842 17:21:05 -- nvmf/common.sh@294 -- # net_devs=() 00:18:08.842 17:21:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:08.842 17:21:05 -- nvmf/common.sh@295 -- # e810=() 00:18:08.842 17:21:05 -- nvmf/common.sh@295 -- # local -ga e810 00:18:08.842 17:21:05 -- nvmf/common.sh@296 -- # x722=() 00:18:08.842 17:21:05 -- nvmf/common.sh@296 -- # local -ga x722 00:18:08.842 17:21:05 -- nvmf/common.sh@297 -- # mlx=() 00:18:08.842 17:21:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:08.842 17:21:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.842 17:21:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:08.842 17:21:05 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:08.842 17:21:05 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:08.842 17:21:05 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:08.842 17:21:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:08.842 17:21:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:08.842 17:21:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:08.842 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:08.842 17:21:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:08.842 17:21:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:08.842 17:21:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:08.842 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:08.842 17:21:05 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:08.842 17:21:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:08.842 17:21:05 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:08.842 17:21:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:08.842 17:21:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.842 17:21:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:08.843 17:21:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.843 17:21:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:08.843 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.843 17:21:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.843 17:21:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:08.843 17:21:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.843 17:21:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:08.843 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.843 17:21:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:08.843 17:21:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:08.843 17:21:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:08.843 17:21:05 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:08.843 17:21:05 -- nvmf/common.sh@57 -- # uname 00:18:08.843 17:21:05 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:08.843 17:21:05 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:08.843 17:21:05 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:08.843 17:21:05 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:08.843 17:21:05 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:08.843 17:21:05 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:08.843 17:21:05 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:08.843 17:21:05 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:08.843 17:21:05 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:08.843 17:21:05 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:08.843 17:21:05 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:08.843 17:21:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:08.843 17:21:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:08.843 17:21:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:08.843 17:21:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:08.843 17:21:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:08.843 17:21:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@104 -- # continue 2 00:18:08.843 17:21:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@104 -- # continue 2 00:18:08.843 17:21:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:08.843 17:21:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:08.843 17:21:05 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:08.843 17:21:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:08.843 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:08.843 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:08.843 altname enp217s0f0np0 00:18:08.843 altname ens818f0np0 00:18:08.843 inet 192.168.100.8/24 scope global mlx_0_0 00:18:08.843 valid_lft forever preferred_lft forever 00:18:08.843 17:21:05 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:08.843 17:21:05 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:08.843 17:21:05 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:08.843 17:21:05 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:08.843 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:08.843 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:08.843 altname enp217s0f1np1 00:18:08.843 altname ens818f1np1 00:18:08.843 inet 192.168.100.9/24 scope global mlx_0_1 00:18:08.843 valid_lft forever preferred_lft forever 00:18:08.843 17:21:05 -- nvmf/common.sh@410 -- # return 0 00:18:08.843 17:21:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:08.843 17:21:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:08.843 17:21:05 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:08.843 17:21:05 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:08.843 17:21:05 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:08.843 17:21:05 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:08.843 17:21:05 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:08.843 17:21:05 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:08.843 17:21:05 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:08.843 17:21:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@104 -- # continue 2 00:18:08.843 17:21:05 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:08.843 17:21:05 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:08.843 17:21:05 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@104 -- # continue 2 00:18:08.843 17:21:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:08.843 17:21:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:08.843 17:21:05 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:08.843 17:21:05 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:08.843 17:21:05 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:08.843 17:21:05 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:08.843 192.168.100.9' 00:18:08.843 17:21:05 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:08.843 192.168.100.9' 00:18:08.843 17:21:05 -- nvmf/common.sh@445 -- # head -n 1 00:18:08.843 17:21:05 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:08.843 17:21:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:08.843 192.168.100.9' 00:18:08.843 17:21:05 -- nvmf/common.sh@446 -- # tail -n +2 00:18:08.843 17:21:05 -- nvmf/common.sh@446 -- # head -n 1 00:18:08.843 17:21:05 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:08.843 17:21:05 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:08.843 17:21:05 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:08.843 17:21:05 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:08.843 17:21:05 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:08.843 17:21:05 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:08.843 17:21:05 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:08.843 17:21:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:08.843 17:21:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.843 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:08.843 17:21:05 -- nvmf/common.sh@469 -- # nvmfpid=1347719 00:18:08.843 17:21:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:08.843 17:21:05 -- nvmf/common.sh@470 -- # waitforlisten 1347719 00:18:09.103 17:21:05 -- common/autotest_common.sh@829 -- # '[' -z 1347719 ']' 00:18:09.103 17:21:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.103 17:21:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.103 17:21:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.103 17:21:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.103 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:18:09.103 [2024-12-14 17:21:05.569464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:09.103 [2024-12-14 17:21:05.569521] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.103 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.103 [2024-12-14 17:21:05.641064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:09.103 [2024-12-14 17:21:05.678231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:09.103 [2024-12-14 17:21:05.678345] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.103 [2024-12-14 17:21:05.678355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.103 [2024-12-14 17:21:05.678365] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.103 [2024-12-14 17:21:05.678419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.103 [2024-12-14 17:21:05.678527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.103 [2024-12-14 17:21:05.678530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.724 17:21:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.725 17:21:06 -- common/autotest_common.sh@862 -- # return 0 00:18:09.725 17:21:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:09.725 17:21:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.725 17:21:06 -- common/autotest_common.sh@10 -- # set +x 00:18:09.984 17:21:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.984 17:21:06 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:09.984 [2024-12-14 17:21:06.602760] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1827600/0x182bab0) succeed. 00:18:09.984 [2024-12-14 17:21:06.611880] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1828b00/0x186d150) succeed. 
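At this point the trace has the RDMA target fully up: nvmf_tgt was started with core mask 0x7, the rdma transport was created, and both mlx5 IB devices registered. A minimal sketch of that bring-up reduced to bare rpc.py calls follows; it assumes a running nvmf_tgt, reuses the 192.168.100.8:4420 listener, SPDK0 serial and cnode0 NQN seen in this run, and exports a plain Malloc bdev instead of the lvol bdev this test attaches next.

    # sketch only -- NQN, serial and address are the values used on this test rig,
    # and rpc.py stands for the full scripts/rpc.py path traced above
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512          # 64 MiB / 512 B blocks; returns e.g. Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The actual nvmf_lvol flow traced below layers a raid0 bdev over two such Malloc bdevs, builds an lvstore and a 20 MiB lvol on it, and adds that lvol as the namespace instead of Malloc0.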
00:18:10.244 17:21:06 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.244 17:21:06 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:10.244 17:21:06 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.504 17:21:07 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:10.504 17:21:07 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:10.763 17:21:07 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:11.023 17:21:07 -- target/nvmf_lvol.sh@29 -- # lvs=52591490-e2bb-44ea-bbf2-97eb2630573a 00:18:11.023 17:21:07 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 52591490-e2bb-44ea-bbf2-97eb2630573a lvol 20 00:18:11.023 17:21:07 -- target/nvmf_lvol.sh@32 -- # lvol=f3f169a5-6f25-47cb-8942-04ac23e80be9 00:18:11.023 17:21:07 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:11.282 17:21:07 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3f169a5-6f25-47cb-8942-04ac23e80be9 00:18:11.542 17:21:08 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:11.542 [2024-12-14 17:21:08.225489] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:11.802 17:21:08 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:11.802 17:21:08 -- target/nvmf_lvol.sh@42 -- # perf_pid=1348231 00:18:11.802 17:21:08 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:11.802 17:21:08 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:11.802 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.182 17:21:09 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f3f169a5-6f25-47cb-8942-04ac23e80be9 MY_SNAPSHOT 00:18:13.182 17:21:09 -- target/nvmf_lvol.sh@47 -- # snapshot=8fdfd0b2-69a2-46c3-bc0f-3c5efede3045 00:18:13.182 17:21:09 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f3f169a5-6f25-47cb-8942-04ac23e80be9 30 00:18:13.182 17:21:09 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8fdfd0b2-69a2-46c3-bc0f-3c5efede3045 MY_CLONE 00:18:13.441 17:21:09 -- target/nvmf_lvol.sh@49 -- # clone=dfeac43a-bbdc-453b-bf5f-723c0ce4b2cb 00:18:13.441 17:21:09 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dfeac43a-bbdc-453b-bf5f-723c0ce4b2cb 00:18:13.701 17:21:10 -- target/nvmf_lvol.sh@53 -- # wait 1348231 00:18:23.689 Initializing NVMe Controllers 00:18:23.689 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:18:23.689 Controller IO queue size 128, less than required. 00:18:23.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:23.689 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:23.689 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:23.689 Initialization complete. Launching workers. 00:18:23.689 ======================================================== 00:18:23.689 Latency(us) 00:18:23.689 Device Information : IOPS MiB/s Average min max 00:18:23.689 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16565.90 64.71 7728.99 1991.67 43903.31 00:18:23.689 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16516.50 64.52 7751.66 3668.71 47713.97 00:18:23.689 ======================================================== 00:18:23.689 Total : 33082.39 129.23 7740.31 1991.67 47713.97 00:18:23.689 00:18:23.689 17:21:19 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:23.689 17:21:20 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3f169a5-6f25-47cb-8942-04ac23e80be9 00:18:23.689 17:21:20 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52591490-e2bb-44ea-bbf2-97eb2630573a 00:18:23.949 17:21:20 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:23.949 17:21:20 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:23.949 17:21:20 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:23.949 17:21:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:23.949 17:21:20 -- nvmf/common.sh@116 -- # sync 00:18:23.949 17:21:20 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:23.949 17:21:20 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:23.949 17:21:20 -- nvmf/common.sh@119 -- # set +e 00:18:23.949 17:21:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:23.949 17:21:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:23.949 rmmod nvme_rdma 00:18:23.949 rmmod nvme_fabrics 00:18:23.949 17:21:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:23.949 17:21:20 -- nvmf/common.sh@123 -- # set -e 00:18:23.949 17:21:20 -- nvmf/common.sh@124 -- # return 0 00:18:23.949 17:21:20 -- nvmf/common.sh@477 -- # '[' -n 1347719 ']' 00:18:23.949 17:21:20 -- nvmf/common.sh@478 -- # killprocess 1347719 00:18:23.949 17:21:20 -- common/autotest_common.sh@936 -- # '[' -z 1347719 ']' 00:18:23.949 17:21:20 -- common/autotest_common.sh@940 -- # kill -0 1347719 00:18:23.949 17:21:20 -- common/autotest_common.sh@941 -- # uname 00:18:23.949 17:21:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:23.949 17:21:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1347719 00:18:23.949 17:21:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:23.949 17:21:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:23.949 17:21:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1347719' 00:18:23.949 killing process with pid 1347719 00:18:23.949 17:21:20 -- common/autotest_common.sh@955 -- # kill 1347719 00:18:23.949 17:21:20 -- common/autotest_common.sh@960 -- # wait 1347719 00:18:24.209 17:21:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:24.209 17:21:20 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:24.209 00:18:24.209 real 0m22.093s 00:18:24.209 user 1m11.603s 00:18:24.209 sys 0m6.405s 00:18:24.209 17:21:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:24.209 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:18:24.209 ************************************ 00:18:24.209 END TEST nvmf_lvol 00:18:24.209 ************************************ 00:18:24.209 17:21:20 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:24.209 17:21:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:24.209 17:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:24.209 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:18:24.209 ************************************ 00:18:24.209 START TEST nvmf_lvs_grow 00:18:24.209 ************************************ 00:18:24.209 17:21:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:24.469 * Looking for test storage... 00:18:24.469 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:24.469 17:21:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:24.469 17:21:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:24.469 17:21:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:24.469 17:21:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:24.469 17:21:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:24.469 17:21:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:24.469 17:21:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:24.469 17:21:21 -- scripts/common.sh@335 -- # IFS=.-: 00:18:24.469 17:21:21 -- scripts/common.sh@335 -- # read -ra ver1 00:18:24.469 17:21:21 -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.469 17:21:21 -- scripts/common.sh@336 -- # read -ra ver2 00:18:24.469 17:21:21 -- scripts/common.sh@337 -- # local 'op=<' 00:18:24.469 17:21:21 -- scripts/common.sh@339 -- # ver1_l=2 00:18:24.469 17:21:21 -- scripts/common.sh@340 -- # ver2_l=1 00:18:24.469 17:21:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:24.469 17:21:21 -- scripts/common.sh@343 -- # case "$op" in 00:18:24.469 17:21:21 -- scripts/common.sh@344 -- # : 1 00:18:24.469 17:21:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:24.469 17:21:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.469 17:21:21 -- scripts/common.sh@364 -- # decimal 1 00:18:24.469 17:21:21 -- scripts/common.sh@352 -- # local d=1 00:18:24.469 17:21:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.469 17:21:21 -- scripts/common.sh@354 -- # echo 1 00:18:24.469 17:21:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:24.469 17:21:21 -- scripts/common.sh@365 -- # decimal 2 00:18:24.469 17:21:21 -- scripts/common.sh@352 -- # local d=2 00:18:24.469 17:21:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.469 17:21:21 -- scripts/common.sh@354 -- # echo 2 00:18:24.469 17:21:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:24.469 17:21:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:24.469 17:21:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:24.469 17:21:21 -- scripts/common.sh@367 -- # return 0 00:18:24.469 17:21:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.469 17:21:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.469 --rc genhtml_branch_coverage=1 00:18:24.469 --rc genhtml_function_coverage=1 00:18:24.469 --rc genhtml_legend=1 00:18:24.469 --rc geninfo_all_blocks=1 00:18:24.469 --rc geninfo_unexecuted_blocks=1 00:18:24.469 00:18:24.469 ' 00:18:24.469 17:21:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.469 --rc genhtml_branch_coverage=1 00:18:24.469 --rc genhtml_function_coverage=1 00:18:24.469 --rc genhtml_legend=1 00:18:24.469 --rc geninfo_all_blocks=1 00:18:24.469 --rc geninfo_unexecuted_blocks=1 00:18:24.469 00:18:24.469 ' 00:18:24.469 17:21:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.469 --rc genhtml_branch_coverage=1 00:18:24.469 --rc genhtml_function_coverage=1 00:18:24.469 --rc genhtml_legend=1 00:18:24.469 --rc geninfo_all_blocks=1 00:18:24.469 --rc geninfo_unexecuted_blocks=1 00:18:24.469 00:18:24.469 ' 00:18:24.469 17:21:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.469 --rc genhtml_branch_coverage=1 00:18:24.469 --rc genhtml_function_coverage=1 00:18:24.469 --rc genhtml_legend=1 00:18:24.469 --rc geninfo_all_blocks=1 00:18:24.469 --rc geninfo_unexecuted_blocks=1 00:18:24.469 00:18:24.469 ' 00:18:24.469 17:21:21 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.469 17:21:21 -- nvmf/common.sh@7 -- # uname -s 00:18:24.469 17:21:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.469 17:21:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.469 17:21:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.469 17:21:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.469 17:21:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.469 17:21:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.469 17:21:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.469 17:21:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.469 17:21:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.469 17:21:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.469 17:21:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:24.469 17:21:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:24.469 17:21:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.469 17:21:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.469 17:21:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.469 17:21:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:24.469 17:21:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.469 17:21:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.469 17:21:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.469 17:21:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.469 17:21:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.469 17:21:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.469 17:21:21 -- paths/export.sh@5 -- # export PATH 00:18:24.469 17:21:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.469 17:21:21 -- nvmf/common.sh@46 -- # : 0 00:18:24.469 17:21:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:24.469 17:21:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:24.469 17:21:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:24.469 17:21:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.469 17:21:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.470 17:21:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:24.470 17:21:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:24.470 17:21:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:24.470 17:21:21 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:24.470 17:21:21 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.470 17:21:21 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:24.470 17:21:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:24.470 17:21:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.470 17:21:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:24.470 17:21:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:24.470 17:21:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:24.470 17:21:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.470 17:21:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.470 17:21:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.470 17:21:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:24.470 17:21:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:24.470 17:21:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:24.470 17:21:21 -- common/autotest_common.sh@10 -- # set +x 00:18:31.045 17:21:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:31.045 17:21:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:31.045 17:21:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:31.045 17:21:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:31.045 17:21:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:31.045 17:21:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:31.045 17:21:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:31.045 17:21:27 -- nvmf/common.sh@294 -- # net_devs=() 00:18:31.045 17:21:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:31.045 17:21:27 -- nvmf/common.sh@295 -- # e810=() 00:18:31.045 17:21:27 -- nvmf/common.sh@295 -- # local -ga e810 00:18:31.045 17:21:27 -- nvmf/common.sh@296 -- # x722=() 00:18:31.045 17:21:27 -- nvmf/common.sh@296 -- # local -ga x722 00:18:31.045 17:21:27 -- nvmf/common.sh@297 -- # mlx=() 00:18:31.045 17:21:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:31.045 17:21:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.045 17:21:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:31.045 17:21:27 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
00:18:31.045 17:21:27 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:31.045 17:21:27 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:31.045 17:21:27 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:31.045 17:21:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:31.045 17:21:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.045 17:21:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:31.045 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:31.045 17:21:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.045 17:21:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.045 17:21:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:31.045 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:31.045 17:21:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.045 17:21:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:31.045 17:21:27 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.045 17:21:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.045 17:21:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.045 17:21:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.045 17:21:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:31.045 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:31.045 17:21:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.045 17:21:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.045 17:21:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.045 17:21:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.045 17:21:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.045 17:21:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:31.045 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:31.045 17:21:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.045 17:21:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:31.045 17:21:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:31.045 17:21:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:31.045 17:21:27 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:31.045 17:21:27 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:31.045 17:21:27 -- nvmf/common.sh@57 -- # uname 00:18:31.045 17:21:27 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:18:31.045 17:21:27 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:31.045 17:21:27 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:31.045 17:21:27 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:31.045 17:21:27 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:31.045 17:21:27 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:31.045 17:21:27 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:31.045 17:21:27 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:31.045 17:21:27 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:31.045 17:21:27 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:31.045 17:21:27 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:31.045 17:21:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.046 17:21:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:31.046 17:21:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:31.046 17:21:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.046 17:21:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:31.046 17:21:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@104 -- # continue 2 00:18:31.046 17:21:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@104 -- # continue 2 00:18:31.046 17:21:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:31.046 17:21:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.046 17:21:27 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:31.046 17:21:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:31.046 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.046 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:31.046 altname enp217s0f0np0 00:18:31.046 altname ens818f0np0 00:18:31.046 inet 192.168.100.8/24 scope global mlx_0_0 00:18:31.046 valid_lft forever preferred_lft forever 00:18:31.046 17:21:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:31.046 17:21:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.046 17:21:27 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:31.046 17:21:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:18:31.046 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.046 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:31.046 altname enp217s0f1np1 00:18:31.046 altname ens818f1np1 00:18:31.046 inet 192.168.100.9/24 scope global mlx_0_1 00:18:31.046 valid_lft forever preferred_lft forever 00:18:31.046 17:21:27 -- nvmf/common.sh@410 -- # return 0 00:18:31.046 17:21:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:31.046 17:21:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:31.046 17:21:27 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:31.046 17:21:27 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:31.046 17:21:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.046 17:21:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:31.046 17:21:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:31.046 17:21:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.046 17:21:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:31.046 17:21:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@104 -- # continue 2 00:18:31.046 17:21:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.046 17:21:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.046 17:21:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@104 -- # continue 2 00:18:31.046 17:21:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:31.046 17:21:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.046 17:21:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:31.046 17:21:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.046 17:21:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.046 17:21:27 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:31.046 192.168.100.9' 00:18:31.046 17:21:27 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:31.046 192.168.100.9' 00:18:31.046 17:21:27 -- nvmf/common.sh@445 -- # head -n 1 00:18:31.046 17:21:27 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:31.046 17:21:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:31.046 192.168.100.9' 00:18:31.046 17:21:27 -- nvmf/common.sh@446 -- # tail -n +2 00:18:31.046 17:21:27 -- nvmf/common.sh@446 -- # head -n 1 00:18:31.046 17:21:27 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:31.046 17:21:27 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:31.046 17:21:27 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:31.046 17:21:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:31.046 17:21:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:31.046 17:21:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:31.046 17:21:27 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:31.046 17:21:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:31.046 17:21:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.046 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:18:31.046 17:21:27 -- nvmf/common.sh@469 -- # nvmfpid=1353681 00:18:31.046 17:21:27 -- nvmf/common.sh@470 -- # waitforlisten 1353681 00:18:31.046 17:21:27 -- common/autotest_common.sh@829 -- # '[' -z 1353681 ']' 00:18:31.046 17:21:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.046 17:21:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.046 17:21:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.046 17:21:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.046 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:18:31.046 17:21:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:31.046 [2024-12-14 17:21:27.535604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:31.046 [2024-12-14 17:21:27.535655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.046 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.046 [2024-12-14 17:21:27.606622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.046 [2024-12-14 17:21:27.643676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:31.046 [2024-12-14 17:21:27.643785] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.046 [2024-12-14 17:21:27.643795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.046 [2024-12-14 17:21:27.643803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
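The address-discovery pass above walks the RDMA-capable ports returned by get_rdma_if_list (mlx_0_0 and mlx_0_1 on this node) and records the first IPv4 address on each as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal sketch of that per-interface lookup, using the same ip/awk/cut pipeline the harness runs in nvmf/common.sh (the interface name is the only real input; the variable name is illustrative):

    interface=mlx_0_0                      # first port reported by get_rdma_if_list
    ip -o -4 addr show "$interface" \
      | awk '{print $4}' \
      | cut -d/ -f1                        # drops the /24 prefix, yielding 192.168.100.8

The second port, mlx_0_1, resolves the same way to 192.168.100.9 and becomes NVMF_SECOND_TARGET_IP.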
00:18:31.046 [2024-12-14 17:21:27.643829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.985 17:21:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.985 17:21:28 -- common/autotest_common.sh@862 -- # return 0 00:18:31.985 17:21:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:31.985 17:21:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.985 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:18:31.985 17:21:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:31.985 [2024-12-14 17:21:28.548401] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x720240/0x7246f0) succeed. 00:18:31.985 [2024-12-14 17:21:28.557774] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7216f0/0x765d90) succeed. 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:31.985 17:21:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:31.985 17:21:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.985 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:18:31.985 ************************************ 00:18:31.985 START TEST lvs_grow_clean 00:18:31.985 ************************************ 00:18:31.985 17:21:28 -- common/autotest_common.sh@1114 -- # lvs_grow 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:31.985 17:21:28 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:32.244 17:21:28 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:32.244 17:21:28 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:32.503 17:21:29 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:32.503 17:21:29 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:32.503 17:21:29 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:32.762 17:21:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:32.762 17:21:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:32.762 17:21:29 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
a9964d46-0cd9-4102-af21-87c95d1759e3 lvol 150 00:18:32.762 17:21:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1257a5f8-e56d-423e-aa17-2cb2f0416c67 00:18:32.763 17:21:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:32.763 17:21:29 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:33.022 [2024-12-14 17:21:29.515238] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:33.022 [2024-12-14 17:21:29.515284] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:33.022 true 00:18:33.022 17:21:29 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:33.022 17:21:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:33.022 17:21:29 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:33.022 17:21:29 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:33.282 17:21:29 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1257a5f8-e56d-423e-aa17-2cb2f0416c67 00:18:33.541 17:21:30 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:33.541 [2024-12-14 17:21:30.217557] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:33.800 17:21:30 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:33.800 17:21:30 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1354154 00:18:33.800 17:21:30 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:33.800 17:21:30 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1354154 /var/tmp/bdevperf.sock 00:18:33.800 17:21:30 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:33.800 17:21:30 -- common/autotest_common.sh@829 -- # '[' -z 1354154 ']' 00:18:33.800 17:21:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.800 17:21:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.800 17:21:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.800 17:21:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.800 17:21:30 -- common/autotest_common.sh@10 -- # set +x 00:18:33.800 [2024-12-14 17:21:30.451607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
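By this point the clean-grow case has provisioned its target entirely through rpc.py: an RDMA transport, a file-backed AIO bdev, an lvstore with 4 MiB clusters (49 data clusters on the 200 MiB file), a 150 MiB lvol, and an NVMe-oF subsystem exposing that lvol on 192.168.100.8:4420, which the bdevperf instance starting here attaches to. Condensed from the commands above, with the long Jenkins paths and UUIDs replaced by placeholders:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    truncate -s 200M <aio-file>
    rpc.py bdev_aio_create <aio-file> aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420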
00:18:33.800 [2024-12-14 17:21:30.451665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354154 ] 00:18:33.800 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.059 [2024-12-14 17:21:30.522270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.059 [2024-12-14 17:21:30.559437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.627 17:21:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.627 17:21:31 -- common/autotest_common.sh@862 -- # return 0 00:18:34.627 17:21:31 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:34.886 Nvme0n1 00:18:34.886 17:21:31 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:35.145 [ 00:18:35.145 { 00:18:35.145 "name": "Nvme0n1", 00:18:35.145 "aliases": [ 00:18:35.145 "1257a5f8-e56d-423e-aa17-2cb2f0416c67" 00:18:35.145 ], 00:18:35.145 "product_name": "NVMe disk", 00:18:35.145 "block_size": 4096, 00:18:35.145 "num_blocks": 38912, 00:18:35.145 "uuid": "1257a5f8-e56d-423e-aa17-2cb2f0416c67", 00:18:35.145 "assigned_rate_limits": { 00:18:35.145 "rw_ios_per_sec": 0, 00:18:35.145 "rw_mbytes_per_sec": 0, 00:18:35.145 "r_mbytes_per_sec": 0, 00:18:35.145 "w_mbytes_per_sec": 0 00:18:35.145 }, 00:18:35.145 "claimed": false, 00:18:35.145 "zoned": false, 00:18:35.145 "supported_io_types": { 00:18:35.145 "read": true, 00:18:35.145 "write": true, 00:18:35.145 "unmap": true, 00:18:35.145 "write_zeroes": true, 00:18:35.145 "flush": true, 00:18:35.145 "reset": true, 00:18:35.145 "compare": true, 00:18:35.145 "compare_and_write": true, 00:18:35.145 "abort": true, 00:18:35.145 "nvme_admin": true, 00:18:35.145 "nvme_io": true 00:18:35.145 }, 00:18:35.145 "memory_domains": [ 00:18:35.145 { 00:18:35.145 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:35.145 "dma_device_type": 0 00:18:35.145 } 00:18:35.145 ], 00:18:35.145 "driver_specific": { 00:18:35.145 "nvme": [ 00:18:35.145 { 00:18:35.145 "trid": { 00:18:35.145 "trtype": "RDMA", 00:18:35.145 "adrfam": "IPv4", 00:18:35.145 "traddr": "192.168.100.8", 00:18:35.145 "trsvcid": "4420", 00:18:35.145 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:35.145 }, 00:18:35.145 "ctrlr_data": { 00:18:35.145 "cntlid": 1, 00:18:35.145 "vendor_id": "0x8086", 00:18:35.145 "model_number": "SPDK bdev Controller", 00:18:35.145 "serial_number": "SPDK0", 00:18:35.146 "firmware_revision": "24.01.1", 00:18:35.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:35.146 "oacs": { 00:18:35.146 "security": 0, 00:18:35.146 "format": 0, 00:18:35.146 "firmware": 0, 00:18:35.146 "ns_manage": 0 00:18:35.146 }, 00:18:35.146 "multi_ctrlr": true, 00:18:35.146 "ana_reporting": false 00:18:35.146 }, 00:18:35.146 "vs": { 00:18:35.146 "nvme_version": "1.3" 00:18:35.146 }, 00:18:35.146 "ns_data": { 00:18:35.146 "id": 1, 00:18:35.146 "can_share": true 00:18:35.146 } 00:18:35.146 } 00:18:35.146 ], 00:18:35.146 "mp_policy": "active_passive" 00:18:35.146 } 00:18:35.146 } 00:18:35.146 ] 00:18:35.146 17:21:31 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1354423 00:18:35.146 17:21:31 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:35.146 17:21:31 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.146 Running I/O for 10 seconds... 00:18:36.083 Latency(us) 00:18:36.083 [2024-12-14T16:21:32.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.083 [2024-12-14T16:21:32.767Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.083 Nvme0n1 : 1.00 36577.00 142.88 0.00 0.00 0.00 0.00 0.00 00:18:36.083 [2024-12-14T16:21:32.767Z] =================================================================================================================== 00:18:36.083 [2024-12-14T16:21:32.767Z] Total : 36577.00 142.88 0.00 0.00 0.00 0.00 0.00 00:18:36.083 00:18:37.020 17:21:33 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:37.280 [2024-12-14T16:21:33.964Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.280 Nvme0n1 : 2.00 37008.00 144.56 0.00 0.00 0.00 0.00 0.00 00:18:37.280 [2024-12-14T16:21:33.964Z] =================================================================================================================== 00:18:37.280 [2024-12-14T16:21:33.964Z] Total : 37008.00 144.56 0.00 0.00 0.00 0.00 0.00 00:18:37.280 00:18:37.280 true 00:18:37.280 17:21:33 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:37.280 17:21:33 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:37.539 17:21:34 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:37.539 17:21:34 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:37.539 17:21:34 -- target/nvmf_lvs_grow.sh@65 -- # wait 1354423 00:18:38.108 [2024-12-14T16:21:34.792Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.108 Nvme0n1 : 3.00 37142.67 145.09 0.00 0.00 0.00 0.00 0.00 00:18:38.108 [2024-12-14T16:21:34.792Z] =================================================================================================================== 00:18:38.108 [2024-12-14T16:21:34.792Z] Total : 37142.67 145.09 0.00 0.00 0.00 0.00 0.00 00:18:38.108 00:18:39.485 [2024-12-14T16:21:36.169Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.485 Nvme0n1 : 4.00 37255.75 145.53 0.00 0.00 0.00 0.00 0.00 00:18:39.485 [2024-12-14T16:21:36.169Z] =================================================================================================================== 00:18:39.485 [2024-12-14T16:21:36.169Z] Total : 37255.75 145.53 0.00 0.00 0.00 0.00 0.00 00:18:39.485 00:18:40.422 [2024-12-14T16:21:37.106Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.422 Nvme0n1 : 5.00 37369.60 145.97 0.00 0.00 0.00 0.00 0.00 00:18:40.422 [2024-12-14T16:21:37.106Z] =================================================================================================================== 00:18:40.422 [2024-12-14T16:21:37.106Z] Total : 37369.60 145.97 0.00 0.00 0.00 0.00 0.00 00:18:40.422 00:18:41.359 [2024-12-14T16:21:38.043Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.359 Nvme0n1 : 6.00 37439.67 146.25 0.00 0.00 0.00 0.00 0.00 00:18:41.359 [2024-12-14T16:21:38.043Z] 
=================================================================================================================== 00:18:41.359 [2024-12-14T16:21:38.043Z] Total : 37439.67 146.25 0.00 0.00 0.00 0.00 0.00 00:18:41.359 00:18:42.297 [2024-12-14T16:21:38.981Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.297 Nvme0n1 : 7.00 37486.43 146.43 0.00 0.00 0.00 0.00 0.00 00:18:42.297 [2024-12-14T16:21:38.981Z] =================================================================================================================== 00:18:42.297 [2024-12-14T16:21:38.981Z] Total : 37486.43 146.43 0.00 0.00 0.00 0.00 0.00 00:18:42.297 00:18:43.234 [2024-12-14T16:21:39.918Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.234 Nvme0n1 : 8.00 37535.88 146.62 0.00 0.00 0.00 0.00 0.00 00:18:43.234 [2024-12-14T16:21:39.918Z] =================================================================================================================== 00:18:43.234 [2024-12-14T16:21:39.918Z] Total : 37535.88 146.62 0.00 0.00 0.00 0.00 0.00 00:18:43.234 00:18:44.172 [2024-12-14T16:21:40.856Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.172 Nvme0n1 : 9.00 37522.22 146.57 0.00 0.00 0.00 0.00 0.00 00:18:44.172 [2024-12-14T16:21:40.856Z] =================================================================================================================== 00:18:44.172 [2024-12-14T16:21:40.856Z] Total : 37522.22 146.57 0.00 0.00 0.00 0.00 0.00 00:18:44.172 00:18:45.111 [2024-12-14T16:21:41.795Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.111 Nvme0n1 : 10.00 37551.80 146.69 0.00 0.00 0.00 0.00 0.00 00:18:45.111 [2024-12-14T16:21:41.795Z] =================================================================================================================== 00:18:45.111 [2024-12-14T16:21:41.795Z] Total : 37551.80 146.69 0.00 0.00 0.00 0.00 0.00 00:18:45.111 00:18:45.111 00:18:45.111 Latency(us) 00:18:45.111 [2024-12-14T16:21:41.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.111 [2024-12-14T16:21:41.795Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.111 Nvme0n1 : 10.01 37551.86 146.69 0.00 0.00 3406.26 2162.69 16043.21 00:18:45.111 [2024-12-14T16:21:41.795Z] =================================================================================================================== 00:18:45.111 [2024-12-14T16:21:41.795Z] Total : 37551.86 146.69 0.00 0.00 3406.26 2162.69 16043.21 00:18:45.371 0 00:18:45.371 17:21:41 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1354154 00:18:45.371 17:21:41 -- common/autotest_common.sh@936 -- # '[' -z 1354154 ']' 00:18:45.371 17:21:41 -- common/autotest_common.sh@940 -- # kill -0 1354154 00:18:45.371 17:21:41 -- common/autotest_common.sh@941 -- # uname 00:18:45.371 17:21:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:45.371 17:21:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1354154 00:18:45.371 17:21:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:45.371 17:21:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:45.371 17:21:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1354154' 00:18:45.371 killing process with pid 1354154 00:18:45.371 17:21:41 -- common/autotest_common.sh@955 -- # kill 1354154 00:18:45.371 Received shutdown signal, test time was about 10.000000 seconds 
00:18:45.371 00:18:45.371 Latency(us) 00:18:45.371 [2024-12-14T16:21:42.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.371 [2024-12-14T16:21:42.055Z] =================================================================================================================== 00:18:45.371 [2024-12-14T16:21:42.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.371 17:21:41 -- common/autotest_common.sh@960 -- # wait 1354154 00:18:45.371 17:21:42 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:45.629 17:21:42 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:45.630 17:21:42 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:45.888 17:21:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:45.888 17:21:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:45.888 17:21:42 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:45.888 [2024-12-14 17:21:42.562335] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:46.148 17:21:42 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:46.148 17:21:42 -- common/autotest_common.sh@650 -- # local es=0 00:18:46.148 17:21:42 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:46.148 17:21:42 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:46.148 17:21:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.148 17:21:42 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:46.148 17:21:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.148 17:21:42 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:46.148 17:21:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.148 17:21:42 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:46.148 17:21:42 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:46.148 17:21:42 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:46.148 request: 00:18:46.148 { 00:18:46.148 "uuid": "a9964d46-0cd9-4102-af21-87c95d1759e3", 00:18:46.148 "method": "bdev_lvol_get_lvstores", 00:18:46.148 "req_id": 1 00:18:46.148 } 00:18:46.148 Got JSON-RPC error response 00:18:46.148 response: 00:18:46.148 { 00:18:46.148 "code": -19, 00:18:46.148 "message": "No such device" 00:18:46.148 } 00:18:46.148 17:21:42 -- common/autotest_common.sh@653 -- # es=1 00:18:46.148 17:21:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:46.148 17:21:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:46.148 17:21:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:46.148 17:21:42 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:46.408 aio_bdev 00:18:46.408 17:21:42 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1257a5f8-e56d-423e-aa17-2cb2f0416c67 00:18:46.408 17:21:42 -- common/autotest_common.sh@897 -- # local bdev_name=1257a5f8-e56d-423e-aa17-2cb2f0416c67 00:18:46.408 17:21:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:46.408 17:21:42 -- common/autotest_common.sh@899 -- # local i 00:18:46.408 17:21:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:46.408 17:21:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:46.408 17:21:42 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:46.668 17:21:43 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1257a5f8-e56d-423e-aa17-2cb2f0416c67 -t 2000 00:18:46.668 [ 00:18:46.668 { 00:18:46.668 "name": "1257a5f8-e56d-423e-aa17-2cb2f0416c67", 00:18:46.668 "aliases": [ 00:18:46.668 "lvs/lvol" 00:18:46.668 ], 00:18:46.668 "product_name": "Logical Volume", 00:18:46.668 "block_size": 4096, 00:18:46.668 "num_blocks": 38912, 00:18:46.668 "uuid": "1257a5f8-e56d-423e-aa17-2cb2f0416c67", 00:18:46.668 "assigned_rate_limits": { 00:18:46.668 "rw_ios_per_sec": 0, 00:18:46.668 "rw_mbytes_per_sec": 0, 00:18:46.668 "r_mbytes_per_sec": 0, 00:18:46.668 "w_mbytes_per_sec": 0 00:18:46.668 }, 00:18:46.668 "claimed": false, 00:18:46.668 "zoned": false, 00:18:46.668 "supported_io_types": { 00:18:46.668 "read": true, 00:18:46.668 "write": true, 00:18:46.668 "unmap": true, 00:18:46.668 "write_zeroes": true, 00:18:46.668 "flush": false, 00:18:46.668 "reset": true, 00:18:46.668 "compare": false, 00:18:46.668 "compare_and_write": false, 00:18:46.668 "abort": false, 00:18:46.668 "nvme_admin": false, 00:18:46.668 "nvme_io": false 00:18:46.668 }, 00:18:46.668 "driver_specific": { 00:18:46.668 "lvol": { 00:18:46.668 "lvol_store_uuid": "a9964d46-0cd9-4102-af21-87c95d1759e3", 00:18:46.668 "base_bdev": "aio_bdev", 00:18:46.668 "thin_provision": false, 00:18:46.668 "snapshot": false, 00:18:46.668 "clone": false, 00:18:46.668 "esnap_clone": false 00:18:46.668 } 00:18:46.668 } 00:18:46.668 } 00:18:46.668 ] 00:18:46.668 17:21:43 -- common/autotest_common.sh@905 -- # return 0 00:18:46.668 17:21:43 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:46.668 17:21:43 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:46.928 17:21:43 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:46.928 17:21:43 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:46.928 17:21:43 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:47.188 17:21:43 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:47.188 17:21:43 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1257a5f8-e56d-423e-aa17-2cb2f0416c67 00:18:47.188 17:21:43 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9964d46-0cd9-4102-af21-87c95d1759e3 00:18:47.447 17:21:43 -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.707 00:18:47.707 real 0m15.560s 00:18:47.707 user 0m15.559s 00:18:47.707 sys 0m1.042s 00:18:47.707 17:21:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:47.707 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:18:47.707 ************************************ 00:18:47.707 END TEST lvs_grow_clean 00:18:47.707 ************************************ 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:47.707 17:21:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:47.707 17:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:47.707 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:18:47.707 ************************************ 00:18:47.707 START TEST lvs_grow_dirty 00:18:47.707 ************************************ 00:18:47.707 17:21:44 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:47.707 17:21:44 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:47.966 17:21:44 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:47.966 17:21:44 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:47.966 17:21:44 -- target/nvmf_lvs_grow.sh@28 -- # lvs=772a6a7c-e877-46da-b108-2fc57678f587 00:18:47.966 17:21:44 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:18:47.966 17:21:44 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:48.225 17:21:44 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:48.225 17:21:44 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:48.225 17:21:44 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 772a6a7c-e877-46da-b108-2fc57678f587 lvol 150 00:18:48.485 17:21:44 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 00:18:48.485 17:21:44 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:48.485 17:21:44 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:18:48.485 [2024-12-14 17:21:45.130247] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:48.485 [2024-12-14 17:21:45.130295] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:48.485 true 00:18:48.485 17:21:45 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:18:48.485 17:21:45 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:48.745 17:21:45 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:48.745 17:21:45 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:49.004 17:21:45 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 00:18:49.004 17:21:45 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:49.263 17:21:45 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:49.523 17:21:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1356914 00:18:49.523 17:21:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.523 17:21:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1356914 /var/tmp/bdevperf.sock 00:18:49.523 17:21:45 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:49.523 17:21:45 -- common/autotest_common.sh@829 -- # '[' -z 1356914 ']' 00:18:49.523 17:21:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.523 17:21:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.523 17:21:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.523 17:21:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.523 17:21:45 -- common/autotest_common.sh@10 -- # set +x 00:18:49.523 [2024-12-14 17:21:46.027886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
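The dirty case's backing file has just been doubled to 400 MiB and rescanned (the AIO bdev grows from 51200 to 102400 blocks of 4 KiB), yet the lvstore still reports 49 data clusters; the bdev_lvol_grow_lvstore call issued later, while bdevperf is driving random writes, is what extends it to 99 clusters of 4 MiB, exactly as the clean case above already verified. The grow-and-verify sequence, condensed with placeholders for the Jenkins paths and lvstore UUID:

    truncate -s 400M <aio-file>                    # double the backing file on disk
    rpc.py bdev_aio_rescan aio_bdev                # bdev picks up the new size: 51200 -> 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>    # lvstore claims the added space
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after

The odd counts (49 rather than 50, 99 rather than 100) reflect space the lvstore keeps for its own metadata.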
00:18:49.523 [2024-12-14 17:21:46.027941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356914 ] 00:18:49.523 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.523 [2024-12-14 17:21:46.097494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.523 [2024-12-14 17:21:46.133898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.505 17:21:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.505 17:21:46 -- common/autotest_common.sh@862 -- # return 0 00:18:50.506 17:21:46 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:50.506 Nvme0n1 00:18:50.506 17:21:47 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:50.825 [ 00:18:50.825 { 00:18:50.825 "name": "Nvme0n1", 00:18:50.825 "aliases": [ 00:18:50.825 "a89e7d93-f26d-4ef3-8fcb-6625caffc7c6" 00:18:50.825 ], 00:18:50.825 "product_name": "NVMe disk", 00:18:50.825 "block_size": 4096, 00:18:50.825 "num_blocks": 38912, 00:18:50.825 "uuid": "a89e7d93-f26d-4ef3-8fcb-6625caffc7c6", 00:18:50.825 "assigned_rate_limits": { 00:18:50.825 "rw_ios_per_sec": 0, 00:18:50.825 "rw_mbytes_per_sec": 0, 00:18:50.825 "r_mbytes_per_sec": 0, 00:18:50.825 "w_mbytes_per_sec": 0 00:18:50.825 }, 00:18:50.825 "claimed": false, 00:18:50.825 "zoned": false, 00:18:50.825 "supported_io_types": { 00:18:50.825 "read": true, 00:18:50.825 "write": true, 00:18:50.825 "unmap": true, 00:18:50.825 "write_zeroes": true, 00:18:50.825 "flush": true, 00:18:50.825 "reset": true, 00:18:50.825 "compare": true, 00:18:50.825 "compare_and_write": true, 00:18:50.825 "abort": true, 00:18:50.825 "nvme_admin": true, 00:18:50.825 "nvme_io": true 00:18:50.825 }, 00:18:50.825 "memory_domains": [ 00:18:50.825 { 00:18:50.825 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:50.825 "dma_device_type": 0 00:18:50.825 } 00:18:50.825 ], 00:18:50.825 "driver_specific": { 00:18:50.825 "nvme": [ 00:18:50.825 { 00:18:50.825 "trid": { 00:18:50.825 "trtype": "RDMA", 00:18:50.825 "adrfam": "IPv4", 00:18:50.825 "traddr": "192.168.100.8", 00:18:50.825 "trsvcid": "4420", 00:18:50.825 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:50.825 }, 00:18:50.825 "ctrlr_data": { 00:18:50.825 "cntlid": 1, 00:18:50.825 "vendor_id": "0x8086", 00:18:50.825 "model_number": "SPDK bdev Controller", 00:18:50.825 "serial_number": "SPDK0", 00:18:50.825 "firmware_revision": "24.01.1", 00:18:50.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:50.825 "oacs": { 00:18:50.825 "security": 0, 00:18:50.825 "format": 0, 00:18:50.825 "firmware": 0, 00:18:50.825 "ns_manage": 0 00:18:50.825 }, 00:18:50.825 "multi_ctrlr": true, 00:18:50.825 "ana_reporting": false 00:18:50.825 }, 00:18:50.825 "vs": { 00:18:50.825 "nvme_version": "1.3" 00:18:50.825 }, 00:18:50.825 "ns_data": { 00:18:50.825 "id": 1, 00:18:50.825 "can_share": true 00:18:50.825 } 00:18:50.825 } 00:18:50.825 ], 00:18:50.825 "mp_policy": "active_passive" 00:18:50.825 } 00:18:50.825 } 00:18:50.825 ] 00:18:50.825 17:21:47 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1357183 00:18:50.825 17:21:47 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:50.825 17:21:47 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.825 Running I/O for 10 seconds... 00:18:51.763 Latency(us) 00:18:51.763 [2024-12-14T16:21:48.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.763 [2024-12-14T16:21:48.447Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.763 Nvme0n1 : 1.00 36358.00 142.02 0.00 0.00 0.00 0.00 0.00 00:18:51.763 [2024-12-14T16:21:48.447Z] =================================================================================================================== 00:18:51.763 [2024-12-14T16:21:48.447Z] Total : 36358.00 142.02 0.00 0.00 0.00 0.00 0.00 00:18:51.763 00:18:52.701 17:21:49 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 772a6a7c-e877-46da-b108-2fc57678f587 00:18:52.701 [2024-12-14T16:21:49.385Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.701 Nvme0n1 : 2.00 36862.00 143.99 0.00 0.00 0.00 0.00 0.00 00:18:52.701 [2024-12-14T16:21:49.385Z] =================================================================================================================== 00:18:52.701 [2024-12-14T16:21:49.385Z] Total : 36862.00 143.99 0.00 0.00 0.00 0.00 0.00 00:18:52.701 00:18:52.960 true 00:18:52.960 17:21:49 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:18:52.960 17:21:49 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:52.960 17:21:49 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:52.960 17:21:49 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:52.960 17:21:49 -- target/nvmf_lvs_grow.sh@65 -- # wait 1357183 00:18:53.898 [2024-12-14T16:21:50.582Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.898 Nvme0n1 : 3.00 36916.00 144.20 0.00 0.00 0.00 0.00 0.00 00:18:53.898 [2024-12-14T16:21:50.582Z] =================================================================================================================== 00:18:53.898 [2024-12-14T16:21:50.582Z] Total : 36916.00 144.20 0.00 0.00 0.00 0.00 0.00 00:18:53.898 00:18:54.836 [2024-12-14T16:21:51.520Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.836 Nvme0n1 : 4.00 36896.75 144.13 0.00 0.00 0.00 0.00 0.00 00:18:54.836 [2024-12-14T16:21:51.520Z] =================================================================================================================== 00:18:54.836 [2024-12-14T16:21:51.520Z] Total : 36896.75 144.13 0.00 0.00 0.00 0.00 0.00 00:18:54.836 00:18:55.774 [2024-12-14T16:21:52.458Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.774 Nvme0n1 : 5.00 37036.00 144.67 0.00 0.00 0.00 0.00 0.00 00:18:55.774 [2024-12-14T16:21:52.458Z] =================================================================================================================== 00:18:55.774 [2024-12-14T16:21:52.458Z] Total : 37036.00 144.67 0.00 0.00 0.00 0.00 0.00 00:18:55.774 00:18:56.714 [2024-12-14T16:21:53.398Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.714 Nvme0n1 : 6.00 37135.67 145.06 0.00 0.00 0.00 0.00 0.00 00:18:56.714 [2024-12-14T16:21:53.398Z] 
=================================================================================================================== 00:18:56.714 [2024-12-14T16:21:53.398Z] Total : 37135.67 145.06 0.00 0.00 0.00 0.00 0.00 00:18:56.714 00:18:58.092 [2024-12-14T16:21:54.776Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:58.092 Nvme0n1 : 7.00 37224.71 145.41 0.00 0.00 0.00 0.00 0.00 00:18:58.092 [2024-12-14T16:21:54.776Z] =================================================================================================================== 00:18:58.092 [2024-12-14T16:21:54.776Z] Total : 37224.71 145.41 0.00 0.00 0.00 0.00 0.00 00:18:58.092 00:18:59.029 [2024-12-14T16:21:55.713Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:59.029 Nvme0n1 : 8.00 37291.75 145.67 0.00 0.00 0.00 0.00 0.00 00:18:59.029 [2024-12-14T16:21:55.713Z] =================================================================================================================== 00:18:59.029 [2024-12-14T16:21:55.713Z] Total : 37291.75 145.67 0.00 0.00 0.00 0.00 0.00 00:18:59.029 00:18:59.972 [2024-12-14T16:21:56.656Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:59.972 Nvme0n1 : 9.00 37343.67 145.87 0.00 0.00 0.00 0.00 0.00 00:18:59.972 [2024-12-14T16:21:56.656Z] =================================================================================================================== 00:18:59.972 [2024-12-14T16:21:56.656Z] Total : 37343.67 145.87 0.00 0.00 0.00 0.00 0.00 00:18:59.972 00:19:00.910 [2024-12-14T16:21:57.594Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.910 Nvme0n1 : 10.00 37388.40 146.05 0.00 0.00 0.00 0.00 0.00 00:19:00.910 [2024-12-14T16:21:57.594Z] =================================================================================================================== 00:19:00.910 [2024-12-14T16:21:57.594Z] Total : 37388.40 146.05 0.00 0.00 0.00 0.00 0.00 00:19:00.910 00:19:00.910 00:19:00.910 Latency(us) 00:19:00.910 [2024-12-14T16:21:57.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.910 [2024-12-14T16:21:57.594Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.910 Nvme0n1 : 10.00 37387.82 146.05 0.00 0.00 3421.17 2005.40 15728.64 00:19:00.910 [2024-12-14T16:21:57.594Z] =================================================================================================================== 00:19:00.910 [2024-12-14T16:21:57.594Z] Total : 37387.82 146.05 0.00 0.00 3421.17 2005.40 15728.64 00:19:00.910 0 00:19:00.910 17:21:57 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1356914 00:19:00.910 17:21:57 -- common/autotest_common.sh@936 -- # '[' -z 1356914 ']' 00:19:00.910 17:21:57 -- common/autotest_common.sh@940 -- # kill -0 1356914 00:19:00.910 17:21:57 -- common/autotest_common.sh@941 -- # uname 00:19:00.910 17:21:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:00.910 17:21:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1356914 00:19:00.910 17:21:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:00.910 17:21:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:00.910 17:21:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1356914' 00:19:00.910 killing process with pid 1356914 00:19:00.910 17:21:57 -- common/autotest_common.sh@955 -- # kill 1356914 00:19:00.910 Received shutdown signal, test time was about 10.000000 seconds 
00:19:00.910 00:19:00.910 Latency(us) 00:19:00.910 [2024-12-14T16:21:57.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.910 [2024-12-14T16:21:57.594Z] =================================================================================================================== 00:19:00.910 [2024-12-14T16:21:57.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.910 17:21:57 -- common/autotest_common.sh@960 -- # wait 1356914 00:19:01.169 17:21:57 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:01.169 17:21:57 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:01.169 17:21:57 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:01.428 17:21:58 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:01.428 17:21:58 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:01.428 17:21:58 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1353681 00:19:01.428 17:21:58 -- target/nvmf_lvs_grow.sh@74 -- # wait 1353681 00:19:01.428 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1353681 Killed "${NVMF_APP[@]}" "$@" 00:19:01.428 17:21:58 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:01.428 17:21:58 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:01.428 17:21:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:01.428 17:21:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:01.428 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:19:01.428 17:21:58 -- nvmf/common.sh@469 -- # nvmfpid=1359072 00:19:01.428 17:21:58 -- nvmf/common.sh@470 -- # waitforlisten 1359072 00:19:01.428 17:21:58 -- common/autotest_common.sh@829 -- # '[' -z 1359072 ']' 00:19:01.428 17:21:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.428 17:21:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.428 17:21:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.428 17:21:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.428 17:21:58 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:01.428 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:19:01.428 [2024-12-14 17:21:58.100879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:01.429 [2024-12-14 17:21:58.100934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.688 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.688 [2024-12-14 17:21:58.172723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.688 [2024-12-14 17:21:58.208987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.688 [2024-12-14 17:21:58.209113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
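This restart is the crux of the dirty variant: instead of tearing the lvol and lvstore down cleanly, the test hard-kills the original nvmf_tgt (pid 1353681) after the grown lvstore has been exercised, leaving its metadata dirty, then brings up a fresh target (pid 1359072) and re-registers only the AIO bdev. The blobstore detects the unclean shutdown, logs "Performing recovery on blobstore", replays blobs 0x0 and 0x1, and the subsequent jq checks confirm that both free_clusters (61) and total_data_clusters (99) survived the crash. A condensed, illustrative sketch of that crash-and-recover sequence, with placeholders for the Jenkins paths and lvstore UUID:

    kill -9 "$nvmfpid"                                # SIGKILL the target that owns the dirty lvstore
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                  # start a fresh target process
    rpc.py bdev_aio_create <aio-file> aio_bdev 4096   # re-attach the backing file; blobstore recovery runs here
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'          # expect 61
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'    # expect 99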
00:19:01.688 [2024-12-14 17:21:58.209123] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.688 [2024-12-14 17:21:58.209132] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.688 [2024-12-14 17:21:58.209153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.255 17:21:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.255 17:21:58 -- common/autotest_common.sh@862 -- # return 0 00:19:02.255 17:21:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:02.255 17:21:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:02.255 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:19:02.514 17:21:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.514 17:21:58 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:02.514 [2024-12-14 17:21:59.125724] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:02.514 [2024-12-14 17:21:59.125829] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:02.514 [2024-12-14 17:21:59.125856] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:02.514 17:21:59 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:02.514 17:21:59 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 00:19:02.514 17:21:59 -- common/autotest_common.sh@897 -- # local bdev_name=a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 00:19:02.514 17:21:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:02.514 17:21:59 -- common/autotest_common.sh@899 -- # local i 00:19:02.514 17:21:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:02.514 17:21:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:02.514 17:21:59 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:02.773 17:21:59 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 -t 2000 00:19:03.032 [ 00:19:03.032 { 00:19:03.032 "name": "a89e7d93-f26d-4ef3-8fcb-6625caffc7c6", 00:19:03.032 "aliases": [ 00:19:03.032 "lvs/lvol" 00:19:03.032 ], 00:19:03.032 "product_name": "Logical Volume", 00:19:03.032 "block_size": 4096, 00:19:03.032 "num_blocks": 38912, 00:19:03.032 "uuid": "a89e7d93-f26d-4ef3-8fcb-6625caffc7c6", 00:19:03.032 "assigned_rate_limits": { 00:19:03.032 "rw_ios_per_sec": 0, 00:19:03.032 "rw_mbytes_per_sec": 0, 00:19:03.032 "r_mbytes_per_sec": 0, 00:19:03.032 "w_mbytes_per_sec": 0 00:19:03.032 }, 00:19:03.032 "claimed": false, 00:19:03.032 "zoned": false, 00:19:03.032 "supported_io_types": { 00:19:03.032 "read": true, 00:19:03.032 "write": true, 00:19:03.032 "unmap": true, 00:19:03.032 "write_zeroes": true, 00:19:03.032 "flush": false, 00:19:03.032 "reset": true, 00:19:03.032 "compare": false, 00:19:03.032 "compare_and_write": false, 00:19:03.032 "abort": false, 00:19:03.032 "nvme_admin": false, 00:19:03.032 "nvme_io": false 00:19:03.032 }, 00:19:03.032 "driver_specific": { 00:19:03.032 "lvol": { 00:19:03.032 "lvol_store_uuid": "772a6a7c-e877-46da-b108-2fc57678f587", 00:19:03.032 "base_bdev": "aio_bdev", 00:19:03.032 "thin_provision": false, 
00:19:03.032 "snapshot": false, 00:19:03.032 "clone": false, 00:19:03.032 "esnap_clone": false 00:19:03.032 } 00:19:03.032 } 00:19:03.032 } 00:19:03.032 ] 00:19:03.032 17:21:59 -- common/autotest_common.sh@905 -- # return 0 00:19:03.032 17:21:59 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:03.032 17:21:59 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:03.032 17:21:59 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:03.032 17:21:59 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:03.032 17:21:59 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:03.291 17:21:59 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:03.291 17:21:59 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:03.550 [2024-12-14 17:22:00.006209] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:03.550 17:22:00 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:03.550 17:22:00 -- common/autotest_common.sh@650 -- # local es=0 00:19:03.550 17:22:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:03.550 17:22:00 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:03.550 17:22:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.550 17:22:00 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:03.550 17:22:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.550 17:22:00 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:03.550 17:22:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.550 17:22:00 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:03.550 17:22:00 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:03.550 17:22:00 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:03.550 request: 00:19:03.550 { 00:19:03.550 "uuid": "772a6a7c-e877-46da-b108-2fc57678f587", 00:19:03.550 "method": "bdev_lvol_get_lvstores", 00:19:03.550 "req_id": 1 00:19:03.550 } 00:19:03.550 Got JSON-RPC error response 00:19:03.550 response: 00:19:03.550 { 00:19:03.550 "code": -19, 00:19:03.550 "message": "No such device" 00:19:03.550 } 00:19:03.809 17:22:00 -- common/autotest_common.sh@653 -- # es=1 00:19:03.809 17:22:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:03.809 17:22:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:03.809 17:22:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:03.809 17:22:00 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:03.809 aio_bdev 00:19:03.809 17:22:00 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 00:19:03.809 17:22:00 -- common/autotest_common.sh@897 -- # local bdev_name=a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 00:19:03.809 17:22:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.809 17:22:00 -- common/autotest_common.sh@899 -- # local i 00:19:03.809 17:22:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.809 17:22:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.809 17:22:00 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:04.068 17:22:00 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 -t 2000 00:19:04.327 [ 00:19:04.327 { 00:19:04.327 "name": "a89e7d93-f26d-4ef3-8fcb-6625caffc7c6", 00:19:04.327 "aliases": [ 00:19:04.327 "lvs/lvol" 00:19:04.327 ], 00:19:04.327 "product_name": "Logical Volume", 00:19:04.327 "block_size": 4096, 00:19:04.327 "num_blocks": 38912, 00:19:04.327 "uuid": "a89e7d93-f26d-4ef3-8fcb-6625caffc7c6", 00:19:04.327 "assigned_rate_limits": { 00:19:04.328 "rw_ios_per_sec": 0, 00:19:04.328 "rw_mbytes_per_sec": 0, 00:19:04.328 "r_mbytes_per_sec": 0, 00:19:04.328 "w_mbytes_per_sec": 0 00:19:04.328 }, 00:19:04.328 "claimed": false, 00:19:04.328 "zoned": false, 00:19:04.328 "supported_io_types": { 00:19:04.328 "read": true, 00:19:04.328 "write": true, 00:19:04.328 "unmap": true, 00:19:04.328 "write_zeroes": true, 00:19:04.328 "flush": false, 00:19:04.328 "reset": true, 00:19:04.328 "compare": false, 00:19:04.328 "compare_and_write": false, 00:19:04.328 "abort": false, 00:19:04.328 "nvme_admin": false, 00:19:04.328 "nvme_io": false 00:19:04.328 }, 00:19:04.328 "driver_specific": { 00:19:04.328 "lvol": { 00:19:04.328 "lvol_store_uuid": "772a6a7c-e877-46da-b108-2fc57678f587", 00:19:04.328 "base_bdev": "aio_bdev", 00:19:04.328 "thin_provision": false, 00:19:04.328 "snapshot": false, 00:19:04.328 "clone": false, 00:19:04.328 "esnap_clone": false 00:19:04.328 } 00:19:04.328 } 00:19:04.328 } 00:19:04.328 ] 00:19:04.328 17:22:00 -- common/autotest_common.sh@905 -- # return 0 00:19:04.328 17:22:00 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:04.328 17:22:00 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:04.328 17:22:00 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:04.328 17:22:00 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:04.328 17:22:00 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:04.586 17:22:01 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:04.586 17:22:01 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a89e7d93-f26d-4ef3-8fcb-6625caffc7c6 00:19:04.845 17:22:01 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 772a6a7c-e877-46da-b108-2fc57678f587 00:19:04.845 17:22:01 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:19:05.104 17:22:01 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:05.104 00:19:05.104 real 0m17.463s 00:19:05.104 user 0m45.022s 00:19:05.104 sys 0m3.224s 00:19:05.104 17:22:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:05.104 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:19:05.104 ************************************ 00:19:05.104 END TEST lvs_grow_dirty 00:19:05.104 ************************************ 00:19:05.104 17:22:01 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:05.104 17:22:01 -- common/autotest_common.sh@806 -- # type=--id 00:19:05.104 17:22:01 -- common/autotest_common.sh@807 -- # id=0 00:19:05.104 17:22:01 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:05.104 17:22:01 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:05.104 17:22:01 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:05.104 17:22:01 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:05.104 17:22:01 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:05.104 17:22:01 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:05.104 nvmf_trace.0 00:19:05.104 17:22:01 -- common/autotest_common.sh@821 -- # return 0 00:19:05.104 17:22:01 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:05.104 17:22:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:05.104 17:22:01 -- nvmf/common.sh@116 -- # sync 00:19:05.104 17:22:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:05.104 17:22:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:05.104 17:22:01 -- nvmf/common.sh@119 -- # set +e 00:19:05.104 17:22:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:05.104 17:22:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:05.364 rmmod nvme_rdma 00:19:05.364 rmmod nvme_fabrics 00:19:05.364 17:22:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:05.364 17:22:01 -- nvmf/common.sh@123 -- # set -e 00:19:05.364 17:22:01 -- nvmf/common.sh@124 -- # return 0 00:19:05.364 17:22:01 -- nvmf/common.sh@477 -- # '[' -n 1359072 ']' 00:19:05.364 17:22:01 -- nvmf/common.sh@478 -- # killprocess 1359072 00:19:05.364 17:22:01 -- common/autotest_common.sh@936 -- # '[' -z 1359072 ']' 00:19:05.364 17:22:01 -- common/autotest_common.sh@940 -- # kill -0 1359072 00:19:05.364 17:22:01 -- common/autotest_common.sh@941 -- # uname 00:19:05.364 17:22:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:05.364 17:22:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1359072 00:19:05.364 17:22:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:05.364 17:22:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:05.364 17:22:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1359072' 00:19:05.364 killing process with pid 1359072 00:19:05.364 17:22:01 -- common/autotest_common.sh@955 -- # kill 1359072 00:19:05.364 17:22:01 -- common/autotest_common.sh@960 -- # wait 1359072 00:19:05.624 17:22:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:05.624 17:22:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:05.624 00:19:05.624 real 0m41.225s 00:19:05.624 user 1m6.815s 00:19:05.624 sys 0m9.659s 00:19:05.624 17:22:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:05.624 17:22:02 -- common/autotest_common.sh@10 -- 
# set +x 00:19:05.624 ************************************ 00:19:05.624 END TEST nvmf_lvs_grow 00:19:05.624 ************************************ 00:19:05.624 17:22:02 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:05.624 17:22:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:05.624 17:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:05.624 17:22:02 -- common/autotest_common.sh@10 -- # set +x 00:19:05.624 ************************************ 00:19:05.624 START TEST nvmf_bdev_io_wait 00:19:05.624 ************************************ 00:19:05.624 17:22:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:05.624 * Looking for test storage... 00:19:05.624 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:05.624 17:22:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:05.624 17:22:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:05.624 17:22:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:05.624 17:22:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:05.624 17:22:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:05.624 17:22:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:05.624 17:22:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:05.624 17:22:02 -- scripts/common.sh@335 -- # IFS=.-: 00:19:05.624 17:22:02 -- scripts/common.sh@335 -- # read -ra ver1 00:19:05.624 17:22:02 -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.624 17:22:02 -- scripts/common.sh@336 -- # read -ra ver2 00:19:05.624 17:22:02 -- scripts/common.sh@337 -- # local 'op=<' 00:19:05.624 17:22:02 -- scripts/common.sh@339 -- # ver1_l=2 00:19:05.624 17:22:02 -- scripts/common.sh@340 -- # ver2_l=1 00:19:05.624 17:22:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:05.624 17:22:02 -- scripts/common.sh@343 -- # case "$op" in 00:19:05.624 17:22:02 -- scripts/common.sh@344 -- # : 1 00:19:05.624 17:22:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:05.624 17:22:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.624 17:22:02 -- scripts/common.sh@364 -- # decimal 1 00:19:05.624 17:22:02 -- scripts/common.sh@352 -- # local d=1 00:19:05.624 17:22:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.624 17:22:02 -- scripts/common.sh@354 -- # echo 1 00:19:05.624 17:22:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:05.624 17:22:02 -- scripts/common.sh@365 -- # decimal 2 00:19:05.624 17:22:02 -- scripts/common.sh@352 -- # local d=2 00:19:05.624 17:22:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.624 17:22:02 -- scripts/common.sh@354 -- # echo 2 00:19:05.624 17:22:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:05.624 17:22:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:05.624 17:22:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:05.624 17:22:02 -- scripts/common.sh@367 -- # return 0 00:19:05.624 17:22:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.624 17:22:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.624 --rc genhtml_branch_coverage=1 00:19:05.624 --rc genhtml_function_coverage=1 00:19:05.624 --rc genhtml_legend=1 00:19:05.624 --rc geninfo_all_blocks=1 00:19:05.624 --rc geninfo_unexecuted_blocks=1 00:19:05.624 00:19:05.624 ' 00:19:05.624 17:22:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.624 --rc genhtml_branch_coverage=1 00:19:05.624 --rc genhtml_function_coverage=1 00:19:05.624 --rc genhtml_legend=1 00:19:05.624 --rc geninfo_all_blocks=1 00:19:05.624 --rc geninfo_unexecuted_blocks=1 00:19:05.624 00:19:05.624 ' 00:19:05.624 17:22:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.624 --rc genhtml_branch_coverage=1 00:19:05.624 --rc genhtml_function_coverage=1 00:19:05.624 --rc genhtml_legend=1 00:19:05.624 --rc geninfo_all_blocks=1 00:19:05.624 --rc geninfo_unexecuted_blocks=1 00:19:05.624 00:19:05.624 ' 00:19:05.624 17:22:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.624 --rc genhtml_branch_coverage=1 00:19:05.624 --rc genhtml_function_coverage=1 00:19:05.624 --rc genhtml_legend=1 00:19:05.624 --rc geninfo_all_blocks=1 00:19:05.625 --rc geninfo_unexecuted_blocks=1 00:19:05.625 00:19:05.625 ' 00:19:05.625 17:22:02 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.625 17:22:02 -- nvmf/common.sh@7 -- # uname -s 00:19:05.625 17:22:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.625 17:22:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.625 17:22:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.625 17:22:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.625 17:22:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.625 17:22:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.625 17:22:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.625 17:22:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.625 17:22:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.625 17:22:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.625 17:22:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:05.625 17:22:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:05.625 17:22:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.625 17:22:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.625 17:22:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.625 17:22:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:05.625 17:22:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.625 17:22:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.625 17:22:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.625 17:22:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.625 17:22:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.625 17:22:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.625 17:22:02 -- paths/export.sh@5 -- # export PATH 00:19:05.625 17:22:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.625 17:22:02 -- nvmf/common.sh@46 -- # : 0 00:19:05.625 17:22:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:05.625 17:22:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:05.625 17:22:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:05.625 17:22:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.625 17:22:02 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.625 17:22:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:05.625 17:22:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:05.625 17:22:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:05.625 17:22:02 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.625 17:22:02 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.625 17:22:02 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:05.625 17:22:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:05.625 17:22:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.625 17:22:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:05.625 17:22:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:05.625 17:22:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:05.625 17:22:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.625 17:22:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.625 17:22:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.625 17:22:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:05.625 17:22:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:05.625 17:22:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:05.625 17:22:02 -- common/autotest_common.sh@10 -- # set +x 00:19:12.197 17:22:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:12.197 17:22:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:12.197 17:22:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:12.197 17:22:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:12.197 17:22:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:12.197 17:22:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:12.197 17:22:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:12.197 17:22:08 -- nvmf/common.sh@294 -- # net_devs=() 00:19:12.197 17:22:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:12.197 17:22:08 -- nvmf/common.sh@295 -- # e810=() 00:19:12.197 17:22:08 -- nvmf/common.sh@295 -- # local -ga e810 00:19:12.197 17:22:08 -- nvmf/common.sh@296 -- # x722=() 00:19:12.197 17:22:08 -- nvmf/common.sh@296 -- # local -ga x722 00:19:12.197 17:22:08 -- nvmf/common.sh@297 -- # mlx=() 00:19:12.197 17:22:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:12.197 17:22:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.197 17:22:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:12.197 17:22:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:19:12.197 17:22:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:12.197 17:22:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:12.197 17:22:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:12.197 17:22:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:12.197 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:12.197 17:22:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:12.197 17:22:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:12.197 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:12.197 17:22:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:12.197 17:22:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:12.197 17:22:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.197 17:22:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.197 17:22:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.197 17:22:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:12.197 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:12.197 17:22:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.197 17:22:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.197 17:22:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.197 17:22:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.197 17:22:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:12.197 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:12.197 17:22:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.197 17:22:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:12.197 17:22:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:12.197 17:22:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:12.197 17:22:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:12.197 17:22:08 -- nvmf/common.sh@57 -- # uname 00:19:12.197 17:22:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:12.197 17:22:08 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:19:12.197 17:22:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:12.197 17:22:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:12.197 17:22:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:12.197 17:22:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:12.197 17:22:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:12.197 17:22:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:12.197 17:22:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:12.197 17:22:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:12.197 17:22:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:12.197 17:22:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.197 17:22:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:12.197 17:22:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:12.197 17:22:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.197 17:22:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:12.197 17:22:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:12.197 17:22:08 -- nvmf/common.sh@104 -- # continue 2 00:19:12.197 17:22:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.197 17:22:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:12.197 17:22:08 -- nvmf/common.sh@104 -- # continue 2 00:19:12.197 17:22:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:12.197 17:22:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:12.197 17:22:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:12.197 17:22:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:12.197 17:22:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.197 17:22:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.197 17:22:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:12.197 17:22:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:12.197 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.197 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:12.197 altname enp217s0f0np0 00:19:12.197 altname ens818f0np0 00:19:12.197 inet 192.168.100.8/24 scope global mlx_0_0 00:19:12.197 valid_lft forever preferred_lft forever 00:19:12.197 17:22:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:12.197 17:22:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:12.197 17:22:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:12.197 17:22:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.197 17:22:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:12.197 17:22:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.197 17:22:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:12.197 17:22:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:12.197 17:22:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:12.198 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.198 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:12.198 altname enp217s0f1np1 00:19:12.198 altname ens818f1np1 00:19:12.198 inet 192.168.100.9/24 scope global mlx_0_1 00:19:12.198 valid_lft forever preferred_lft forever 00:19:12.198 17:22:08 -- nvmf/common.sh@410 -- # return 0 00:19:12.198 17:22:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:12.198 17:22:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:12.198 17:22:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:12.198 17:22:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:12.198 17:22:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:12.198 17:22:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.198 17:22:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:12.198 17:22:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:12.198 17:22:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.198 17:22:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:12.198 17:22:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.198 17:22:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.198 17:22:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.198 17:22:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:12.198 17:22:08 -- nvmf/common.sh@104 -- # continue 2 00:19:12.198 17:22:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:12.198 17:22:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.198 17:22:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.198 17:22:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.198 17:22:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.198 17:22:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:12.198 17:22:08 -- nvmf/common.sh@104 -- # continue 2 00:19:12.198 17:22:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:12.198 17:22:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:12.198 17:22:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:12.198 17:22:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:12.198 17:22:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.198 17:22:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.198 17:22:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:12.198 17:22:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:12.198 17:22:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:12.198 17:22:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:12.198 17:22:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:12.198 17:22:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:12.198 17:22:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:12.198 192.168.100.9' 00:19:12.198 17:22:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:12.198 192.168.100.9' 00:19:12.198 17:22:08 -- nvmf/common.sh@445 -- # head -n 1 00:19:12.198 17:22:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:12.198 17:22:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:12.198 192.168.100.9' 00:19:12.198 17:22:08 -- nvmf/common.sh@446 -- # tail -n +2 00:19:12.198 17:22:08 -- nvmf/common.sh@446 -- # head -n 1 00:19:12.198 17:22:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:12.198 17:22:08 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:12.198 17:22:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:12.198 17:22:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:12.198 17:22:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:12.198 17:22:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:12.198 17:22:08 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:12.198 17:22:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:12.198 17:22:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.198 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.198 17:22:08 -- nvmf/common.sh@469 -- # nvmfpid=1362941 00:19:12.198 17:22:08 -- nvmf/common.sh@470 -- # waitforlisten 1362941 00:19:12.198 17:22:08 -- common/autotest_common.sh@829 -- # '[' -z 1362941 ']' 00:19:12.198 17:22:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.198 17:22:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.198 17:22:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.198 17:22:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.198 17:22:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:12.198 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.198 [2024-12-14 17:22:08.692303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:12.198 [2024-12-14 17:22:08.692357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.198 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.198 [2024-12-14 17:22:08.762503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.198 [2024-12-14 17:22:08.800595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:12.198 [2024-12-14 17:22:08.800723] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.198 [2024-12-14 17:22:08.800733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.198 [2024-12-14 17:22:08.800742] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
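For reference, the target bring-up logged above reduces to roughly the following commands (paths as used in this workspace; the suite's nvmfappstart/waitforlisten helpers add PID tracking and retries that this sketch leaves out, and the polling command shown is only an illustrative way to wait for the RPC socket):

  modprobe nvme-rdma
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # waitforlisten then polls the RPC socket until the app answers, e.g.:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods

Once the reactors report in below, the test script resumes and configures the target over RPC.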
00:19:12.198 [2024-12-14 17:22:08.800789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.198 [2024-12-14 17:22:08.800892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.198 [2024-12-14 17:22:08.800957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.198 [2024-12-14 17:22:08.800958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.198 17:22:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.198 17:22:08 -- common/autotest_common.sh@862 -- # return 0 00:19:12.198 17:22:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:12.198 17:22:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:12.198 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.458 17:22:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.458 17:22:08 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:12.458 17:22:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.458 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.458 17:22:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.458 17:22:08 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:12.458 17:22:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.458 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.458 17:22:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.458 17:22:08 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:12.458 17:22:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.458 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.458 [2024-12-14 17:22:08.994397] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2121070/0x2125540) succeed. 00:19:12.458 [2024-12-14 17:22:09.003207] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2122610/0x2166be0) succeed. 
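With the RDMA transport created on both mlx5 devices, the remaining rpc_cmd calls finish wiring the target. Condensing the full sequence for this target (the first three calls ran just above, the rest follow below) into direct rpc.py invocations, where rpc_cmd in the suite forwards to the same script and RPC is just shorthand for this sketch:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_set_options -p 5 -c 1
  $RPC framework_start_init
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The last call is what produces the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice further down.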
00:19:12.458 17:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.458 17:22:09 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:12.458 17:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.458 17:22:09 -- common/autotest_common.sh@10 -- # set +x 00:19:12.718 Malloc0 00:19:12.718 17:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:12.718 17:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.718 17:22:09 -- common/autotest_common.sh@10 -- # set +x 00:19:12.718 17:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.718 17:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.718 17:22:09 -- common/autotest_common.sh@10 -- # set +x 00:19:12.718 17:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:12.718 17:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.718 17:22:09 -- common/autotest_common.sh@10 -- # set +x 00:19:12.718 [2024-12-14 17:22:09.180002] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:12.718 17:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1363149 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@30 -- # READ_PID=1363151 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # config=() 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # local subsystem config 00:19:12.718 17:22:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:12.718 17:22:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:12.718 { 00:19:12.718 "params": { 00:19:12.718 "name": "Nvme$subsystem", 00:19:12.718 "trtype": "$TEST_TRANSPORT", 00:19:12.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.718 "adrfam": "ipv4", 00:19:12.718 "trsvcid": "$NVMF_PORT", 00:19:12.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.718 "hdgst": ${hdgst:-false}, 00:19:12.718 "ddgst": ${ddgst:-false} 00:19:12.718 }, 00:19:12.718 "method": "bdev_nvme_attach_controller" 00:19:12.718 } 00:19:12.718 EOF 00:19:12.718 )") 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1363153 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # config=() 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # local subsystem config 00:19:12.718 17:22:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:12.718 17:22:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:12.718 { 00:19:12.718 "params": { 00:19:12.718 "name": 
"Nvme$subsystem", 00:19:12.718 "trtype": "$TEST_TRANSPORT", 00:19:12.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.718 "adrfam": "ipv4", 00:19:12.718 "trsvcid": "$NVMF_PORT", 00:19:12.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.718 "hdgst": ${hdgst:-false}, 00:19:12.718 "ddgst": ${ddgst:-false} 00:19:12.718 }, 00:19:12.718 "method": "bdev_nvme_attach_controller" 00:19:12.718 } 00:19:12.718 EOF 00:19:12.718 )") 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1363156 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:12.718 17:22:09 -- nvmf/common.sh@542 -- # cat 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@35 -- # sync 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # config=() 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # local subsystem config 00:19:12.718 17:22:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:12.718 17:22:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:12.718 { 00:19:12.718 "params": { 00:19:12.718 "name": "Nvme$subsystem", 00:19:12.718 "trtype": "$TEST_TRANSPORT", 00:19:12.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.718 "adrfam": "ipv4", 00:19:12.718 "trsvcid": "$NVMF_PORT", 00:19:12.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.718 "hdgst": ${hdgst:-false}, 00:19:12.718 "ddgst": ${ddgst:-false} 00:19:12.718 }, 00:19:12.718 "method": "bdev_nvme_attach_controller" 00:19:12.718 } 00:19:12.718 EOF 00:19:12.718 )") 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:12.718 17:22:09 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:12.718 17:22:09 -- nvmf/common.sh@542 -- # cat 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # config=() 00:19:12.718 17:22:09 -- nvmf/common.sh@520 -- # local subsystem config 00:19:12.718 17:22:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:12.718 17:22:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:12.718 { 00:19:12.718 "params": { 00:19:12.718 "name": "Nvme$subsystem", 00:19:12.718 "trtype": "$TEST_TRANSPORT", 00:19:12.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.718 "adrfam": "ipv4", 00:19:12.718 "trsvcid": "$NVMF_PORT", 00:19:12.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.719 "hdgst": ${hdgst:-false}, 00:19:12.719 "ddgst": ${ddgst:-false} 00:19:12.719 }, 00:19:12.719 "method": "bdev_nvme_attach_controller" 00:19:12.719 } 00:19:12.719 EOF 00:19:12.719 )") 00:19:12.719 17:22:09 -- nvmf/common.sh@542 -- # cat 00:19:12.719 17:22:09 -- target/bdev_io_wait.sh@37 -- # wait 1363149 00:19:12.719 17:22:09 -- nvmf/common.sh@542 -- # cat 00:19:12.719 17:22:09 -- nvmf/common.sh@544 -- # jq . 00:19:12.719 17:22:09 -- nvmf/common.sh@544 -- # jq . 00:19:12.719 17:22:09 -- nvmf/common.sh@545 -- # IFS=, 00:19:12.719 17:22:09 -- nvmf/common.sh@544 -- # jq . 
00:19:12.719 17:22:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:12.719 "params": { 00:19:12.719 "name": "Nvme1", 00:19:12.719 "trtype": "rdma", 00:19:12.719 "traddr": "192.168.100.8", 00:19:12.719 "adrfam": "ipv4", 00:19:12.719 "trsvcid": "4420", 00:19:12.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.719 "hdgst": false, 00:19:12.719 "ddgst": false 00:19:12.719 }, 00:19:12.719 "method": "bdev_nvme_attach_controller" 00:19:12.719 }' 00:19:12.719 17:22:09 -- nvmf/common.sh@544 -- # jq . 00:19:12.719 17:22:09 -- nvmf/common.sh@545 -- # IFS=, 00:19:12.719 17:22:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:12.719 "params": { 00:19:12.719 "name": "Nvme1", 00:19:12.719 "trtype": "rdma", 00:19:12.719 "traddr": "192.168.100.8", 00:19:12.719 "adrfam": "ipv4", 00:19:12.719 "trsvcid": "4420", 00:19:12.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.719 "hdgst": false, 00:19:12.719 "ddgst": false 00:19:12.719 }, 00:19:12.719 "method": "bdev_nvme_attach_controller" 00:19:12.719 }' 00:19:12.719 17:22:09 -- nvmf/common.sh@545 -- # IFS=, 00:19:12.719 17:22:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:12.719 "params": { 00:19:12.719 "name": "Nvme1", 00:19:12.719 "trtype": "rdma", 00:19:12.719 "traddr": "192.168.100.8", 00:19:12.719 "adrfam": "ipv4", 00:19:12.719 "trsvcid": "4420", 00:19:12.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.719 "hdgst": false, 00:19:12.719 "ddgst": false 00:19:12.719 }, 00:19:12.719 "method": "bdev_nvme_attach_controller" 00:19:12.719 }' 00:19:12.719 17:22:09 -- nvmf/common.sh@545 -- # IFS=, 00:19:12.719 17:22:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:12.719 "params": { 00:19:12.719 "name": "Nvme1", 00:19:12.719 "trtype": "rdma", 00:19:12.719 "traddr": "192.168.100.8", 00:19:12.719 "adrfam": "ipv4", 00:19:12.719 "trsvcid": "4420", 00:19:12.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.719 "hdgst": false, 00:19:12.719 "ddgst": false 00:19:12.719 }, 00:19:12.719 "method": "bdev_nvme_attach_controller" 00:19:12.719 }' 00:19:12.719 [2024-12-14 17:22:09.227069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:12.719 [2024-12-14 17:22:09.227070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:12.719 [2024-12-14 17:22:09.227124] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-14 17:22:09.227125] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:12.719 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:12.719 [2024-12-14 17:22:09.230808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:12.719 [2024-12-14 17:22:09.230852] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:12.719 [2024-12-14 17:22:09.232723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:12.719 [2024-12-14 17:22:09.232771] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:12.719 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.719 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.978 [2024-12-14 17:22:09.411328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.978 [2024-12-14 17:22:09.434841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:12.978 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.978 [2024-12-14 17:22:09.510431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.978 [2024-12-14 17:22:09.534113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:12.978 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.978 [2024-12-14 17:22:09.604981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.978 [2024-12-14 17:22:09.634047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:13.236 [2024-12-14 17:22:09.664923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.236 [2024-12-14 17:22:09.688837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:13.236 Running I/O for 1 seconds... 00:19:13.236 Running I/O for 1 seconds... 00:19:13.236 Running I/O for 1 seconds... 00:19:13.236 Running I/O for 1 seconds... 00:19:14.170 00:19:14.170 Latency(us) 00:19:14.170 [2024-12-14T16:22:10.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.170 [2024-12-14T16:22:10.854Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:14.170 Nvme1n1 : 1.00 20659.19 80.70 0.00 0.00 6179.95 3381.66 14470.35 00:19:14.170 [2024-12-14T16:22:10.854Z] =================================================================================================================== 00:19:14.170 [2024-12-14T16:22:10.854Z] Total : 20659.19 80.70 0.00 0.00 6179.95 3381.66 14470.35 00:19:14.170 00:19:14.170 Latency(us) 00:19:14.170 [2024-12-14T16:22:10.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.170 [2024-12-14T16:22:10.854Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:14.170 Nvme1n1 : 1.01 15678.47 61.24 0.00 0.00 8138.85 5321.52 17511.22 00:19:14.170 [2024-12-14T16:22:10.854Z] =================================================================================================================== 00:19:14.170 [2024-12-14T16:22:10.854Z] Total : 15678.47 61.24 0.00 0.00 8138.85 5321.52 17511.22 00:19:14.170 00:19:14.170 Latency(us) 00:19:14.170 [2024-12-14T16:22:10.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.170 [2024-12-14T16:22:10.854Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:14.170 Nvme1n1 : 1.00 265450.95 1036.92 0.00 0.00 480.33 190.87 1782.58 00:19:14.170 [2024-12-14T16:22:10.854Z] =================================================================================================================== 00:19:14.170 [2024-12-14T16:22:10.854Z] Total : 265450.95 1036.92 0.00 0.00 480.33 190.87 1782.58 00:19:14.170 00:19:14.170 Latency(us) 00:19:14.170 [2024-12-14T16:22:10.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.170 [2024-12-14T16:22:10.854Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 
128, IO size: 4096) 00:19:14.170 Nvme1n1 : 1.00 15486.02 60.49 0.00 0.00 8247.14 3591.37 19293.80 00:19:14.170 [2024-12-14T16:22:10.854Z] =================================================================================================================== 00:19:14.170 [2024-12-14T16:22:10.854Z] Total : 15486.02 60.49 0.00 0.00 8247.14 3591.37 19293.80 00:19:14.738 17:22:11 -- target/bdev_io_wait.sh@38 -- # wait 1363151 00:19:14.738 17:22:11 -- target/bdev_io_wait.sh@39 -- # wait 1363153 00:19:14.738 17:22:11 -- target/bdev_io_wait.sh@40 -- # wait 1363156 00:19:14.738 17:22:11 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.738 17:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.738 17:22:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.738 17:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.738 17:22:11 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:14.738 17:22:11 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:14.738 17:22:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:14.738 17:22:11 -- nvmf/common.sh@116 -- # sync 00:19:14.738 17:22:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:14.738 17:22:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:14.738 17:22:11 -- nvmf/common.sh@119 -- # set +e 00:19:14.738 17:22:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:14.738 17:22:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:14.738 rmmod nvme_rdma 00:19:14.738 rmmod nvme_fabrics 00:19:14.738 17:22:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:14.738 17:22:11 -- nvmf/common.sh@123 -- # set -e 00:19:14.738 17:22:11 -- nvmf/common.sh@124 -- # return 0 00:19:14.738 17:22:11 -- nvmf/common.sh@477 -- # '[' -n 1362941 ']' 00:19:14.738 17:22:11 -- nvmf/common.sh@478 -- # killprocess 1362941 00:19:14.738 17:22:11 -- common/autotest_common.sh@936 -- # '[' -z 1362941 ']' 00:19:14.738 17:22:11 -- common/autotest_common.sh@940 -- # kill -0 1362941 00:19:14.738 17:22:11 -- common/autotest_common.sh@941 -- # uname 00:19:14.738 17:22:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:14.738 17:22:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1362941 00:19:14.738 17:22:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:14.738 17:22:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:14.739 17:22:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1362941' 00:19:14.739 killing process with pid 1362941 00:19:14.739 17:22:11 -- common/autotest_common.sh@955 -- # kill 1362941 00:19:14.739 17:22:11 -- common/autotest_common.sh@960 -- # wait 1362941 00:19:14.998 17:22:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:14.998 17:22:11 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:14.998 00:19:14.998 real 0m9.451s 00:19:14.998 user 0m17.984s 00:19:14.998 sys 0m6.244s 00:19:14.998 17:22:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:14.998 17:22:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.998 ************************************ 00:19:14.998 END TEST nvmf_bdev_io_wait 00:19:14.998 ************************************ 00:19:14.998 17:22:11 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:14.998 17:22:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:14.998 17:22:11 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:19:14.998 17:22:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.998 ************************************ 00:19:14.998 START TEST nvmf_queue_depth 00:19:14.998 ************************************ 00:19:14.998 17:22:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:15.259 * Looking for test storage... 00:19:15.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:15.259 17:22:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:15.259 17:22:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:15.259 17:22:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:15.259 17:22:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:15.259 17:22:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:15.259 17:22:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:15.259 17:22:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:15.259 17:22:11 -- scripts/common.sh@335 -- # IFS=.-: 00:19:15.259 17:22:11 -- scripts/common.sh@335 -- # read -ra ver1 00:19:15.259 17:22:11 -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.259 17:22:11 -- scripts/common.sh@336 -- # read -ra ver2 00:19:15.259 17:22:11 -- scripts/common.sh@337 -- # local 'op=<' 00:19:15.259 17:22:11 -- scripts/common.sh@339 -- # ver1_l=2 00:19:15.259 17:22:11 -- scripts/common.sh@340 -- # ver2_l=1 00:19:15.259 17:22:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:15.259 17:22:11 -- scripts/common.sh@343 -- # case "$op" in 00:19:15.259 17:22:11 -- scripts/common.sh@344 -- # : 1 00:19:15.259 17:22:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:15.259 17:22:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.259 17:22:11 -- scripts/common.sh@364 -- # decimal 1 00:19:15.259 17:22:11 -- scripts/common.sh@352 -- # local d=1 00:19:15.259 17:22:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.259 17:22:11 -- scripts/common.sh@354 -- # echo 1 00:19:15.259 17:22:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:15.259 17:22:11 -- scripts/common.sh@365 -- # decimal 2 00:19:15.259 17:22:11 -- scripts/common.sh@352 -- # local d=2 00:19:15.259 17:22:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.259 17:22:11 -- scripts/common.sh@354 -- # echo 2 00:19:15.259 17:22:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:15.259 17:22:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:15.259 17:22:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:15.259 17:22:11 -- scripts/common.sh@367 -- # return 0 00:19:15.259 17:22:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.259 17:22:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.259 --rc genhtml_branch_coverage=1 00:19:15.259 --rc genhtml_function_coverage=1 00:19:15.259 --rc genhtml_legend=1 00:19:15.259 --rc geninfo_all_blocks=1 00:19:15.259 --rc geninfo_unexecuted_blocks=1 00:19:15.259 00:19:15.259 ' 00:19:15.259 17:22:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.259 --rc genhtml_branch_coverage=1 00:19:15.259 --rc genhtml_function_coverage=1 00:19:15.259 --rc genhtml_legend=1 00:19:15.259 --rc geninfo_all_blocks=1 00:19:15.259 --rc geninfo_unexecuted_blocks=1 00:19:15.259 00:19:15.259 ' 00:19:15.259 17:22:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.259 --rc genhtml_branch_coverage=1 00:19:15.259 --rc genhtml_function_coverage=1 00:19:15.259 --rc genhtml_legend=1 00:19:15.259 --rc geninfo_all_blocks=1 00:19:15.259 --rc geninfo_unexecuted_blocks=1 00:19:15.259 00:19:15.259 ' 00:19:15.259 17:22:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:15.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.259 --rc genhtml_branch_coverage=1 00:19:15.259 --rc genhtml_function_coverage=1 00:19:15.259 --rc genhtml_legend=1 00:19:15.259 --rc geninfo_all_blocks=1 00:19:15.259 --rc geninfo_unexecuted_blocks=1 00:19:15.259 00:19:15.259 ' 00:19:15.259 17:22:11 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.259 17:22:11 -- nvmf/common.sh@7 -- # uname -s 00:19:15.259 17:22:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.259 17:22:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.259 17:22:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.259 17:22:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.259 17:22:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.259 17:22:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.259 17:22:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.259 17:22:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.259 17:22:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.259 17:22:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.259 17:22:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:15.259 17:22:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:15.259 17:22:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.259 17:22:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.259 17:22:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.259 17:22:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:15.259 17:22:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.259 17:22:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.259 17:22:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.259 17:22:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.259 17:22:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.259 17:22:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.259 17:22:11 -- paths/export.sh@5 -- # export PATH 00:19:15.259 17:22:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.259 17:22:11 -- nvmf/common.sh@46 -- # : 0 00:19:15.259 17:22:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:15.259 17:22:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:15.259 17:22:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:15.259 17:22:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.259 17:22:11 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.259 17:22:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:15.259 17:22:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:15.259 17:22:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:15.259 17:22:11 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:15.259 17:22:11 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:15.259 17:22:11 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.259 17:22:11 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:15.259 17:22:11 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:15.259 17:22:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.259 17:22:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:15.259 17:22:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:15.259 17:22:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:15.259 17:22:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.259 17:22:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.259 17:22:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.259 17:22:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:15.259 17:22:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:15.259 17:22:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:15.259 17:22:11 -- common/autotest_common.sh@10 -- # set +x 00:19:21.948 17:22:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:21.948 17:22:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:21.948 17:22:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:21.948 17:22:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:21.948 17:22:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:21.948 17:22:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:21.948 17:22:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:21.948 17:22:18 -- nvmf/common.sh@294 -- # net_devs=() 00:19:21.948 17:22:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:21.948 17:22:18 -- nvmf/common.sh@295 -- # e810=() 00:19:21.948 17:22:18 -- nvmf/common.sh@295 -- # local -ga e810 00:19:21.948 17:22:18 -- nvmf/common.sh@296 -- # x722=() 00:19:21.948 17:22:18 -- nvmf/common.sh@296 -- # local -ga x722 00:19:21.948 17:22:18 -- nvmf/common.sh@297 -- # mlx=() 00:19:21.948 17:22:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:21.948 17:22:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.948 17:22:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:21.948 17:22:18 -- nvmf/common.sh@320 
-- # [[ rdma == rdma ]] 00:19:21.948 17:22:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:21.948 17:22:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:21.948 17:22:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:21.948 17:22:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:21.948 17:22:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:21.948 17:22:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:21.948 17:22:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:21.948 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:21.948 17:22:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:21.948 17:22:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:21.948 17:22:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:21.948 17:22:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:21.948 17:22:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:21.948 17:22:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:21.948 17:22:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:21.948 17:22:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:21.948 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:21.949 17:22:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:21.949 17:22:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:21.949 17:22:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.949 17:22:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:21.949 17:22:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.949 17:22:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:21.949 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.949 17:22:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.949 17:22:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:21.949 17:22:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.949 17:22:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:21.949 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.949 17:22:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:21.949 17:22:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:21.949 17:22:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:21.949 17:22:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:21.949 17:22:18 -- nvmf/common.sh@57 -- # uname 00:19:21.949 17:22:18 -- nvmf/common.sh@57 
-- # '[' Linux '!=' Linux ']' 00:19:21.949 17:22:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:21.949 17:22:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:21.949 17:22:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:21.949 17:22:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:21.949 17:22:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:21.949 17:22:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:21.949 17:22:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:21.949 17:22:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:21.949 17:22:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:21.949 17:22:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:21.949 17:22:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:21.949 17:22:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:21.949 17:22:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:21.949 17:22:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:21.949 17:22:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:21.949 17:22:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@104 -- # continue 2 00:19:21.949 17:22:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@104 -- # continue 2 00:19:21.949 17:22:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:21.949 17:22:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:21.949 17:22:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:21.949 17:22:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:21.949 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:21.949 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:21.949 altname enp217s0f0np0 00:19:21.949 altname ens818f0np0 00:19:21.949 inet 192.168.100.8/24 scope global mlx_0_0 00:19:21.949 valid_lft forever preferred_lft forever 00:19:21.949 17:22:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:21.949 17:22:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:21.949 17:22:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:21.949 17:22:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:21.949 17:22:18 
-- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:21.949 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:21.949 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:21.949 altname enp217s0f1np1 00:19:21.949 altname ens818f1np1 00:19:21.949 inet 192.168.100.9/24 scope global mlx_0_1 00:19:21.949 valid_lft forever preferred_lft forever 00:19:21.949 17:22:18 -- nvmf/common.sh@410 -- # return 0 00:19:21.949 17:22:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:21.949 17:22:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:21.949 17:22:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:21.949 17:22:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:21.949 17:22:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:21.949 17:22:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:21.949 17:22:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:21.949 17:22:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:21.949 17:22:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:21.949 17:22:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@104 -- # continue 2 00:19:21.949 17:22:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:21.949 17:22:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:21.949 17:22:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@104 -- # continue 2 00:19:21.949 17:22:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:21.949 17:22:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:21.949 17:22:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:21.949 17:22:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:21.949 17:22:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:21.949 17:22:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:21.949 192.168.100.9' 00:19:21.949 17:22:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:21.949 192.168.100.9' 00:19:21.949 17:22:18 -- nvmf/common.sh@445 -- # head -n 1 00:19:21.949 17:22:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:21.949 17:22:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:21.949 192.168.100.9' 00:19:21.949 17:22:18 -- nvmf/common.sh@446 -- # tail -n +2 00:19:21.949 17:22:18 -- nvmf/common.sh@446 -- # head -n 1 00:19:21.949 17:22:18 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:21.949 17:22:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:21.949 17:22:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:21.949 17:22:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:21.949 17:22:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:21.949 17:22:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:21.949 17:22:18 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:21.949 17:22:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:21.949 17:22:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.949 17:22:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.949 17:22:18 -- nvmf/common.sh@469 -- # nvmfpid=1366837 00:19:21.949 17:22:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:21.949 17:22:18 -- nvmf/common.sh@470 -- # waitforlisten 1366837 00:19:21.949 17:22:18 -- common/autotest_common.sh@829 -- # '[' -z 1366837 ']' 00:19:21.949 17:22:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.949 17:22:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.949 17:22:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.949 17:22:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.949 17:22:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.949 [2024-12-14 17:22:18.491238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:21.949 [2024-12-14 17:22:18.491294] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.949 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.949 [2024-12-14 17:22:18.562296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.949 [2024-12-14 17:22:18.600334] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:21.949 [2024-12-14 17:22:18.600466] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.950 [2024-12-14 17:22:18.600476] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.950 [2024-12-14 17:22:18.600485] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
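The trace from here through the next few entries is the whole target bring-up for this queue-depth run: nvmf_tgt is started on core mask 0x2, the suite waits for its RPC socket, and then the RDMA transport, a 64 MiB Malloc bdev, the subsystem, its namespace, and the 192.168.100.8:4420 listener are created. Condensed into a sketch (flags, NQN, and addresses are copied from the trace; the polling loop is a simplified stand-in for the suite's waitforlisten helper, not its real implementation):

    rpc=./scripts/rpc.py
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait until the app answers on its default RPC socket (simplified waitforlisten)
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420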
00:19:21.950 [2024-12-14 17:22:18.600511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.890 17:22:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.890 17:22:19 -- common/autotest_common.sh@862 -- # return 0 00:19:22.890 17:22:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:22.890 17:22:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.890 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.890 17:22:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.890 17:22:19 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:22.890 17:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.890 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.890 [2024-12-14 17:22:19.373853] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x909550/0x90da00) succeed. 00:19:22.890 [2024-12-14 17:22:19.382606] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x90aa00/0x94f0a0) succeed. 00:19:22.890 17:22:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.890 17:22:19 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.890 17:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.890 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.890 Malloc0 00:19:22.890 17:22:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.890 17:22:19 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:22.890 17:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.890 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.890 17:22:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.890 17:22:19 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.890 17:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.890 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.890 17:22:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.890 17:22:19 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:22.890 17:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.890 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.890 [2024-12-14 17:22:19.462347] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:22.890 17:22:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.890 17:22:19 -- target/queue_depth.sh@30 -- # bdevperf_pid=1366930 00:19:22.890 17:22:19 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:22.890 17:22:19 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.890 17:22:19 -- target/queue_depth.sh@33 -- # waitforlisten 1366930 /var/tmp/bdevperf.sock 00:19:22.890 17:22:19 -- common/autotest_common.sh@829 -- # '[' -z 1366930 ']' 00:19:22.890 17:22:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.890 17:22:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.890 17:22:19 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.890 17:22:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.890 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:19:22.890 [2024-12-14 17:22:19.507547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:22.890 [2024-12-14 17:22:19.507594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366930 ] 00:19:22.890 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.150 [2024-12-14 17:22:19.577887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.150 [2024-12-14 17:22:19.613990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.719 17:22:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.719 17:22:20 -- common/autotest_common.sh@862 -- # return 0 00:19:23.719 17:22:20 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:23.719 17:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.719 17:22:20 -- common/autotest_common.sh@10 -- # set +x 00:19:23.979 NVMe0n1 00:19:23.979 17:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.979 17:22:20 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.979 Running I/O for 10 seconds... 
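The bdevperf launch, the bdev_nvme_attach_controller call, and the perform_tests RPC traced above are the entire initiator side of the queue-depth check. Condensed (arguments copied from the trace, workspace prefixes shortened), it is roughly:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # the suite first waits for /var/tmp/bdevperf.sock to come up, then attaches
    # the NVMe-oF controller exported by the target started earlier
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the queued-up run; bdevperf then prints the IOPS/latency table below
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 / -o 4096 pair is what exercises the deep queue: bdevperf keeps 1024 outstanding 4 KiB verify I/Os against NVMe0n1 for the 10-second run whose results follow.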
00:19:33.970 00:19:33.970 Latency(us) 00:19:33.970 [2024-12-14T16:22:30.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.970 [2024-12-14T16:22:30.654Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:33.970 Verification LBA range: start 0x0 length 0x4000 00:19:33.970 NVMe0n1 : 10.03 29350.43 114.65 0.00 0.00 34808.47 8074.04 35861.30 00:19:33.970 [2024-12-14T16:22:30.654Z] =================================================================================================================== 00:19:33.970 [2024-12-14T16:22:30.654Z] Total : 29350.43 114.65 0.00 0.00 34808.47 8074.04 35861.30 00:19:33.970 0 00:19:33.970 17:22:30 -- target/queue_depth.sh@39 -- # killprocess 1366930 00:19:33.970 17:22:30 -- common/autotest_common.sh@936 -- # '[' -z 1366930 ']' 00:19:33.970 17:22:30 -- common/autotest_common.sh@940 -- # kill -0 1366930 00:19:33.970 17:22:30 -- common/autotest_common.sh@941 -- # uname 00:19:33.970 17:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:33.970 17:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1366930 00:19:33.970 17:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:33.970 17:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:33.970 17:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1366930' 00:19:33.970 killing process with pid 1366930 00:19:33.970 17:22:30 -- common/autotest_common.sh@955 -- # kill 1366930 00:19:33.970 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.970 00:19:33.970 Latency(us) 00:19:33.970 [2024-12-14T16:22:30.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.970 [2024-12-14T16:22:30.654Z] =================================================================================================================== 00:19:33.970 [2024-12-14T16:22:30.654Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.970 17:22:30 -- common/autotest_common.sh@960 -- # wait 1366930 00:19:34.230 17:22:30 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:34.230 17:22:30 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:34.230 17:22:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:34.230 17:22:30 -- nvmf/common.sh@116 -- # sync 00:19:34.230 17:22:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:34.230 17:22:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:34.230 17:22:30 -- nvmf/common.sh@119 -- # set +e 00:19:34.230 17:22:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:34.230 17:22:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:34.230 rmmod nvme_rdma 00:19:34.230 rmmod nvme_fabrics 00:19:34.230 17:22:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:34.230 17:22:30 -- nvmf/common.sh@123 -- # set -e 00:19:34.230 17:22:30 -- nvmf/common.sh@124 -- # return 0 00:19:34.230 17:22:30 -- nvmf/common.sh@477 -- # '[' -n 1366837 ']' 00:19:34.230 17:22:30 -- nvmf/common.sh@478 -- # killprocess 1366837 00:19:34.230 17:22:30 -- common/autotest_common.sh@936 -- # '[' -z 1366837 ']' 00:19:34.230 17:22:30 -- common/autotest_common.sh@940 -- # kill -0 1366837 00:19:34.230 17:22:30 -- common/autotest_common.sh@941 -- # uname 00:19:34.230 17:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:34.230 17:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1366837 00:19:34.490 17:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:34.490 
17:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:34.490 17:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1366837' 00:19:34.490 killing process with pid 1366837 00:19:34.490 17:22:30 -- common/autotest_common.sh@955 -- # kill 1366837 00:19:34.490 17:22:30 -- common/autotest_common.sh@960 -- # wait 1366837 00:19:34.490 17:22:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:34.490 17:22:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:34.490 00:19:34.490 real 0m19.549s 00:19:34.490 user 0m26.288s 00:19:34.490 sys 0m5.719s 00:19:34.490 17:22:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:34.490 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:19:34.490 ************************************ 00:19:34.490 END TEST nvmf_queue_depth 00:19:34.490 ************************************ 00:19:34.750 17:22:31 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:34.750 17:22:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:34.750 17:22:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:34.750 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:19:34.750 ************************************ 00:19:34.750 START TEST nvmf_multipath 00:19:34.750 ************************************ 00:19:34.750 17:22:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:34.750 * Looking for test storage... 00:19:34.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:34.750 17:22:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:34.750 17:22:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:34.750 17:22:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:34.750 17:22:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:34.750 17:22:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:34.750 17:22:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:34.750 17:22:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:34.750 17:22:31 -- scripts/common.sh@335 -- # IFS=.-: 00:19:34.750 17:22:31 -- scripts/common.sh@335 -- # read -ra ver1 00:19:34.750 17:22:31 -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.750 17:22:31 -- scripts/common.sh@336 -- # read -ra ver2 00:19:34.750 17:22:31 -- scripts/common.sh@337 -- # local 'op=<' 00:19:34.750 17:22:31 -- scripts/common.sh@339 -- # ver1_l=2 00:19:34.750 17:22:31 -- scripts/common.sh@340 -- # ver2_l=1 00:19:34.750 17:22:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:34.750 17:22:31 -- scripts/common.sh@343 -- # case "$op" in 00:19:34.750 17:22:31 -- scripts/common.sh@344 -- # : 1 00:19:34.750 17:22:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:34.750 17:22:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.750 17:22:31 -- scripts/common.sh@364 -- # decimal 1 00:19:34.750 17:22:31 -- scripts/common.sh@352 -- # local d=1 00:19:34.750 17:22:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.750 17:22:31 -- scripts/common.sh@354 -- # echo 1 00:19:34.750 17:22:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:34.750 17:22:31 -- scripts/common.sh@365 -- # decimal 2 00:19:34.750 17:22:31 -- scripts/common.sh@352 -- # local d=2 00:19:34.750 17:22:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.750 17:22:31 -- scripts/common.sh@354 -- # echo 2 00:19:34.750 17:22:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:34.750 17:22:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:34.750 17:22:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:34.750 17:22:31 -- scripts/common.sh@367 -- # return 0 00:19:34.750 17:22:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.750 17:22:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.750 --rc genhtml_branch_coverage=1 00:19:34.750 --rc genhtml_function_coverage=1 00:19:34.750 --rc genhtml_legend=1 00:19:34.750 --rc geninfo_all_blocks=1 00:19:34.750 --rc geninfo_unexecuted_blocks=1 00:19:34.750 00:19:34.750 ' 00:19:34.750 17:22:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.750 --rc genhtml_branch_coverage=1 00:19:34.750 --rc genhtml_function_coverage=1 00:19:34.750 --rc genhtml_legend=1 00:19:34.750 --rc geninfo_all_blocks=1 00:19:34.750 --rc geninfo_unexecuted_blocks=1 00:19:34.750 00:19:34.750 ' 00:19:34.750 17:22:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.750 --rc genhtml_branch_coverage=1 00:19:34.750 --rc genhtml_function_coverage=1 00:19:34.750 --rc genhtml_legend=1 00:19:34.750 --rc geninfo_all_blocks=1 00:19:34.750 --rc geninfo_unexecuted_blocks=1 00:19:34.750 00:19:34.750 ' 00:19:34.750 17:22:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:34.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.751 --rc genhtml_branch_coverage=1 00:19:34.751 --rc genhtml_function_coverage=1 00:19:34.751 --rc genhtml_legend=1 00:19:34.751 --rc geninfo_all_blocks=1 00:19:34.751 --rc geninfo_unexecuted_blocks=1 00:19:34.751 00:19:34.751 ' 00:19:34.751 17:22:31 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.751 17:22:31 -- nvmf/common.sh@7 -- # uname -s 00:19:34.751 17:22:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.751 17:22:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.751 17:22:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.751 17:22:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.751 17:22:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.751 17:22:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.751 17:22:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.751 17:22:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.751 17:22:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.751 17:22:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.751 17:22:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:34.751 17:22:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:34.751 17:22:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.751 17:22:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.751 17:22:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.751 17:22:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:34.751 17:22:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.751 17:22:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.751 17:22:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.751 17:22:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.751 17:22:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.751 17:22:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.751 17:22:31 -- paths/export.sh@5 -- # export PATH 00:19:34.751 17:22:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.751 17:22:31 -- nvmf/common.sh@46 -- # : 0 00:19:34.751 17:22:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:34.751 17:22:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:34.751 17:22:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:34.751 17:22:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.751 17:22:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.751 17:22:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:34.751 17:22:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:34.751 17:22:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:34.751 17:22:31 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.011 17:22:31 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.011 17:22:31 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:35.011 17:22:31 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:35.011 17:22:31 -- target/multipath.sh@43 -- # nvmftestinit 00:19:35.011 17:22:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:35.011 17:22:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.011 17:22:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.011 17:22:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.011 17:22:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.011 17:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.011 17:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.011 17:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.011 17:22:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:35.011 17:22:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:35.011 17:22:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:35.011 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:19:41.592 17:22:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:41.592 17:22:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:41.592 17:22:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:41.592 17:22:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:41.592 17:22:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:41.592 17:22:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:41.592 17:22:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:41.592 17:22:37 -- nvmf/common.sh@294 -- # net_devs=() 00:19:41.592 17:22:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:41.592 17:22:37 -- nvmf/common.sh@295 -- # e810=() 00:19:41.592 17:22:37 -- nvmf/common.sh@295 -- # local -ga e810 00:19:41.592 17:22:37 -- nvmf/common.sh@296 -- # x722=() 00:19:41.592 17:22:37 -- nvmf/common.sh@296 -- # local -ga x722 00:19:41.592 17:22:37 -- nvmf/common.sh@297 -- # mlx=() 00:19:41.592 17:22:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:41.592 17:22:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.592 17:22:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.592 
17:22:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:41.592 17:22:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:41.592 17:22:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:41.592 17:22:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:41.592 17:22:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:41.592 17:22:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:41.592 17:22:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:41.592 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:41.592 17:22:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:41.592 17:22:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:41.592 17:22:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:41.592 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:41.592 17:22:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:41.592 17:22:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:41.592 17:22:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:41.592 17:22:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:41.592 17:22:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.592 17:22:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:41.592 17:22:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.592 17:22:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:41.592 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:41.592 17:22:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.592 17:22:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:41.592 17:22:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.592 17:22:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:41.592 17:22:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.593 17:22:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:41.593 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:41.593 17:22:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.593 17:22:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:41.593 17:22:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:41.593 17:22:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:41.593 17:22:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:41.593 17:22:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:41.593 17:22:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:41.593 17:22:37 -- nvmf/common.sh@489 -- # 
load_ib_rdma_modules 00:19:41.593 17:22:37 -- nvmf/common.sh@57 -- # uname 00:19:41.593 17:22:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:41.593 17:22:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:41.593 17:22:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:41.593 17:22:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:41.593 17:22:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:41.593 17:22:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:41.593 17:22:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:41.593 17:22:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:41.593 17:22:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:41.593 17:22:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:41.593 17:22:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:41.593 17:22:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:41.593 17:22:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:41.593 17:22:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:41.593 17:22:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:41.593 17:22:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:41.593 17:22:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:41.593 17:22:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.593 17:22:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:41.593 17:22:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:41.593 17:22:37 -- nvmf/common.sh@104 -- # continue 2 00:19:41.593 17:22:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@104 -- # continue 2 00:19:41.593 17:22:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:41.593 17:22:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:41.593 17:22:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:41.593 17:22:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:41.593 17:22:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:41.593 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:41.593 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:41.593 altname enp217s0f0np0 00:19:41.593 altname ens818f0np0 00:19:41.593 inet 192.168.100.8/24 scope global mlx_0_0 00:19:41.593 valid_lft forever preferred_lft forever 00:19:41.593 17:22:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:41.593 17:22:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:41.593 17:22:38 -- 
nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:41.593 17:22:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:41.593 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:41.593 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:41.593 altname enp217s0f1np1 00:19:41.593 altname ens818f1np1 00:19:41.593 inet 192.168.100.9/24 scope global mlx_0_1 00:19:41.593 valid_lft forever preferred_lft forever 00:19:41.593 17:22:38 -- nvmf/common.sh@410 -- # return 0 00:19:41.593 17:22:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:41.593 17:22:38 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:41.593 17:22:38 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:41.593 17:22:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:41.593 17:22:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:41.593 17:22:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:41.593 17:22:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:41.593 17:22:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:41.593 17:22:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:41.593 17:22:38 -- nvmf/common.sh@104 -- # continue 2 00:19:41.593 17:22:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.593 17:22:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:41.593 17:22:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@104 -- # continue 2 00:19:41.593 17:22:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:41.593 17:22:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:41.593 17:22:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:41.593 17:22:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:41.593 17:22:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:41.593 17:22:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:41.593 17:22:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:41.593 192.168.100.9' 00:19:41.593 17:22:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:41.593 192.168.100.9' 00:19:41.593 17:22:38 -- nvmf/common.sh@445 -- # head -n 1 00:19:41.593 17:22:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:41.593 17:22:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:41.593 192.168.100.9' 00:19:41.593 17:22:38 
-- nvmf/common.sh@446 -- # tail -n +2 00:19:41.593 17:22:38 -- nvmf/common.sh@446 -- # head -n 1 00:19:41.593 17:22:38 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:41.593 17:22:38 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:41.593 17:22:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:41.593 17:22:38 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:41.593 17:22:38 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:41.593 17:22:38 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:41.593 run this test only with TCP transport for now 00:19:41.593 17:22:38 -- target/multipath.sh@53 -- # nvmftestfini 00:19:41.593 17:22:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.593 17:22:38 -- nvmf/common.sh@116 -- # sync 00:19:41.593 17:22:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@119 -- # set +e 00:19:41.593 17:22:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.593 17:22:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:41.593 rmmod nvme_rdma 00:19:41.593 rmmod nvme_fabrics 00:19:41.593 17:22:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.593 17:22:38 -- nvmf/common.sh@123 -- # set -e 00:19:41.593 17:22:38 -- nvmf/common.sh@124 -- # return 0 00:19:41.593 17:22:38 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:41.593 17:22:38 -- target/multipath.sh@54 -- # exit 0 00:19:41.593 17:22:38 -- target/multipath.sh@1 -- # nvmftestfini 00:19:41.593 17:22:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.593 17:22:38 -- nvmf/common.sh@116 -- # sync 00:19:41.593 17:22:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@119 -- # set +e 00:19:41.593 17:22:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.593 17:22:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:41.593 17:22:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.593 17:22:38 -- nvmf/common.sh@123 -- # set -e 00:19:41.593 17:22:38 -- nvmf/common.sh@124 -- # return 0 00:19:41.593 17:22:38 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:41.593 17:22:38 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:41.593 00:19:41.593 real 0m7.006s 00:19:41.593 user 0m2.032s 00:19:41.593 sys 0m5.186s 00:19:41.593 17:22:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:41.593 17:22:38 -- common/autotest_common.sh@10 -- # set +x 00:19:41.593 ************************************ 00:19:41.593 END TEST nvmf_multipath 00:19:41.593 ************************************ 00:19:41.593 17:22:38 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:41.593 17:22:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:41.593 17:22:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:41.593 17:22:38 -- common/autotest_common.sh@10 
-- # set +x 00:19:41.854 ************************************ 00:19:41.854 START TEST nvmf_zcopy 00:19:41.854 ************************************ 00:19:41.854 17:22:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:41.854 * Looking for test storage... 00:19:41.854 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:41.854 17:22:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:41.854 17:22:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:41.854 17:22:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:41.854 17:22:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:41.854 17:22:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:41.854 17:22:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:41.854 17:22:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:41.854 17:22:38 -- scripts/common.sh@335 -- # IFS=.-: 00:19:41.854 17:22:38 -- scripts/common.sh@335 -- # read -ra ver1 00:19:41.854 17:22:38 -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.854 17:22:38 -- scripts/common.sh@336 -- # read -ra ver2 00:19:41.854 17:22:38 -- scripts/common.sh@337 -- # local 'op=<' 00:19:41.854 17:22:38 -- scripts/common.sh@339 -- # ver1_l=2 00:19:41.854 17:22:38 -- scripts/common.sh@340 -- # ver2_l=1 00:19:41.854 17:22:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:41.854 17:22:38 -- scripts/common.sh@343 -- # case "$op" in 00:19:41.854 17:22:38 -- scripts/common.sh@344 -- # : 1 00:19:41.854 17:22:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:41.854 17:22:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.854 17:22:38 -- scripts/common.sh@364 -- # decimal 1 00:19:41.854 17:22:38 -- scripts/common.sh@352 -- # local d=1 00:19:41.854 17:22:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.854 17:22:38 -- scripts/common.sh@354 -- # echo 1 00:19:41.854 17:22:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:41.854 17:22:38 -- scripts/common.sh@365 -- # decimal 2 00:19:41.854 17:22:38 -- scripts/common.sh@352 -- # local d=2 00:19:41.854 17:22:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.854 17:22:38 -- scripts/common.sh@354 -- # echo 2 00:19:41.854 17:22:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:41.854 17:22:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:41.854 17:22:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:41.854 17:22:38 -- scripts/common.sh@367 -- # return 0 00:19:41.854 17:22:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.854 17:22:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:41.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.854 --rc genhtml_branch_coverage=1 00:19:41.854 --rc genhtml_function_coverage=1 00:19:41.854 --rc genhtml_legend=1 00:19:41.854 --rc geninfo_all_blocks=1 00:19:41.854 --rc geninfo_unexecuted_blocks=1 00:19:41.854 00:19:41.854 ' 00:19:41.854 17:22:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:41.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.854 --rc genhtml_branch_coverage=1 00:19:41.854 --rc genhtml_function_coverage=1 00:19:41.854 --rc genhtml_legend=1 00:19:41.854 --rc geninfo_all_blocks=1 00:19:41.854 --rc geninfo_unexecuted_blocks=1 00:19:41.854 00:19:41.854 ' 00:19:41.854 17:22:38 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:41.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.854 --rc genhtml_branch_coverage=1 00:19:41.854 --rc genhtml_function_coverage=1 00:19:41.854 --rc genhtml_legend=1 00:19:41.854 --rc geninfo_all_blocks=1 00:19:41.854 --rc geninfo_unexecuted_blocks=1 00:19:41.854 00:19:41.854 ' 00:19:41.854 17:22:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:41.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.854 --rc genhtml_branch_coverage=1 00:19:41.854 --rc genhtml_function_coverage=1 00:19:41.854 --rc genhtml_legend=1 00:19:41.854 --rc geninfo_all_blocks=1 00:19:41.854 --rc geninfo_unexecuted_blocks=1 00:19:41.854 00:19:41.854 ' 00:19:41.854 17:22:38 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.854 17:22:38 -- nvmf/common.sh@7 -- # uname -s 00:19:41.854 17:22:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.854 17:22:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.854 17:22:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.854 17:22:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.854 17:22:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.854 17:22:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.854 17:22:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.854 17:22:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.854 17:22:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.854 17:22:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.854 17:22:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:41.854 17:22:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:41.854 17:22:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.854 17:22:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.854 17:22:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.854 17:22:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:41.854 17:22:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.854 17:22:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.854 17:22:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.854 17:22:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.854 17:22:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.855 17:22:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.855 17:22:38 -- paths/export.sh@5 -- # export PATH 00:19:41.855 17:22:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.855 17:22:38 -- nvmf/common.sh@46 -- # : 0 00:19:41.855 17:22:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.855 17:22:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.855 17:22:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.855 17:22:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.855 17:22:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.855 17:22:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.855 17:22:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.855 17:22:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.855 17:22:38 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:41.855 17:22:38 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:41.855 17:22:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.855 17:22:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.855 17:22:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.855 17:22:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.855 17:22:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.855 17:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.855 17:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.855 17:22:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:41.855 17:22:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:41.855 17:22:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:41.855 17:22:38 -- common/autotest_common.sh@10 -- # set +x 00:19:48.435 17:22:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:48.435 17:22:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:48.435 17:22:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:48.435 
17:22:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:48.435 17:22:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:48.435 17:22:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:48.435 17:22:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:48.435 17:22:44 -- nvmf/common.sh@294 -- # net_devs=() 00:19:48.435 17:22:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:48.435 17:22:44 -- nvmf/common.sh@295 -- # e810=() 00:19:48.435 17:22:44 -- nvmf/common.sh@295 -- # local -ga e810 00:19:48.435 17:22:44 -- nvmf/common.sh@296 -- # x722=() 00:19:48.435 17:22:44 -- nvmf/common.sh@296 -- # local -ga x722 00:19:48.435 17:22:44 -- nvmf/common.sh@297 -- # mlx=() 00:19:48.435 17:22:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:48.435 17:22:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.435 17:22:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:48.435 17:22:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:48.435 17:22:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:48.435 17:22:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:48.435 17:22:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:48.435 17:22:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:48.435 17:22:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:48.435 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:48.435 17:22:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.435 17:22:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:48.435 17:22:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:48.435 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:48.435 17:22:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:48.435 17:22:44 -- 
nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:48.435 17:22:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:48.435 17:22:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:48.435 17:22:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.435 17:22:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:48.435 17:22:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.435 17:22:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:48.435 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:48.435 17:22:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.435 17:22:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:48.435 17:22:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.435 17:22:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:48.435 17:22:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.435 17:22:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:48.435 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:48.435 17:22:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.435 17:22:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:48.435 17:22:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:48.435 17:22:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:48.435 17:22:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:48.435 17:22:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:48.435 17:22:44 -- nvmf/common.sh@57 -- # uname 00:19:48.436 17:22:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:48.436 17:22:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:48.436 17:22:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:48.436 17:22:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:48.436 17:22:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:48.436 17:22:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:48.436 17:22:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:48.436 17:22:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:48.436 17:22:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:48.436 17:22:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:48.436 17:22:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:48.436 17:22:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.436 17:22:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:48.436 17:22:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:48.436 17:22:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.436 17:22:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:48.436 17:22:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@104 -- # continue 2 00:19:48.436 17:22:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.436 17:22:44 -- 
nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@104 -- # continue 2 00:19:48.436 17:22:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:48.436 17:22:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:48.436 17:22:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:48.436 17:22:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:48.436 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.436 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:48.436 altname enp217s0f0np0 00:19:48.436 altname ens818f0np0 00:19:48.436 inet 192.168.100.8/24 scope global mlx_0_0 00:19:48.436 valid_lft forever preferred_lft forever 00:19:48.436 17:22:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:48.436 17:22:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:48.436 17:22:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:48.436 17:22:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:48.436 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:48.436 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:48.436 altname enp217s0f1np1 00:19:48.436 altname ens818f1np1 00:19:48.436 inet 192.168.100.9/24 scope global mlx_0_1 00:19:48.436 valid_lft forever preferred_lft forever 00:19:48.436 17:22:44 -- nvmf/common.sh@410 -- # return 0 00:19:48.436 17:22:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:48.436 17:22:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:48.436 17:22:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:48.436 17:22:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:48.436 17:22:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:48.436 17:22:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:48.436 17:22:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:48.436 17:22:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:48.436 17:22:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:48.436 17:22:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@104 -- # continue 2 00:19:48.436 17:22:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:48.436 17:22:44 -- 
nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:48.436 17:22:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:48.436 17:22:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@104 -- # continue 2 00:19:48.436 17:22:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:48.436 17:22:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:48.436 17:22:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:48.436 17:22:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:48.436 17:22:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:48.436 17:22:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:48.436 192.168.100.9' 00:19:48.436 17:22:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:48.436 192.168.100.9' 00:19:48.436 17:22:44 -- nvmf/common.sh@445 -- # head -n 1 00:19:48.436 17:22:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:48.436 17:22:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:48.436 192.168.100.9' 00:19:48.436 17:22:44 -- nvmf/common.sh@446 -- # tail -n +2 00:19:48.436 17:22:44 -- nvmf/common.sh@446 -- # head -n 1 00:19:48.436 17:22:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:48.436 17:22:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:48.436 17:22:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:48.436 17:22:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:48.436 17:22:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:48.436 17:22:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:48.436 17:22:44 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:48.436 17:22:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:48.436 17:22:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.436 17:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:48.436 17:22:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.436 17:22:44 -- nvmf/common.sh@469 -- # nvmfpid=1375442 00:19:48.436 17:22:44 -- nvmf/common.sh@470 -- # waitforlisten 1375442 00:19:48.436 17:22:44 -- common/autotest_common.sh@829 -- # '[' -z 1375442 ']' 00:19:48.436 17:22:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.436 17:22:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.436 17:22:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
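The trace above does not configure 192.168.100.8/.9 at this point; it reads them back from the RDMA-capable netdevs that allocate_nic_ips brought up earlier and splits the list into a first and second target IP. A minimal stand-alone sketch of that pattern, using the interface names and addressing seen in this run (the hard-coded interface list is an assumption for the sketch; the real script derives it via get_rdma_if_list):

  #!/usr/bin/env bash
  # Collect the IPv4 address of each RDMA-capable netdev, then split the list into
  # the first and second target IPs, mirroring nvmf/common.sh as traced above.
  rdma_ifs=(mlx_0_0 mlx_0_1)   # assumed for this sketch
  ip_list=""
  for ifc in "${rdma_ifs[@]}"; do
      addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
      ip_list+="$addr"$'\n'
  done
  NVMF_FIRST_TARGET_IP=$(echo "$ip_list" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$ip_list" | tail -n +2 | head -n 1)
  echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9 in this run
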
00:19:48.436 17:22:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.436 17:22:44 -- common/autotest_common.sh@10 -- # set +x 00:19:48.436 [2024-12-14 17:22:44.943286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:48.436 [2024-12-14 17:22:44.943336] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.436 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.436 [2024-12-14 17:22:45.013822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.436 [2024-12-14 17:22:45.050442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:48.436 [2024-12-14 17:22:45.050556] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.436 [2024-12-14 17:22:45.050567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.436 [2024-12-14 17:22:45.050577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.436 [2024-12-14 17:22:45.050596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.374 17:22:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.374 17:22:45 -- common/autotest_common.sh@862 -- # return 0 00:19:49.374 17:22:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.374 17:22:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.374 17:22:45 -- common/autotest_common.sh@10 -- # set +x 00:19:49.374 17:22:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.374 17:22:45 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:49.374 17:22:45 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:49.374 Unsupported transport: rdma 00:19:49.374 17:22:45 -- target/zcopy.sh@17 -- # exit 0 00:19:49.374 17:22:45 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:49.374 17:22:45 -- common/autotest_common.sh@806 -- # type=--id 00:19:49.374 17:22:45 -- common/autotest_common.sh@807 -- # id=0 00:19:49.374 17:22:45 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:49.374 17:22:45 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:49.374 17:22:45 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:49.374 17:22:45 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:49.374 17:22:45 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:49.374 17:22:45 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:49.374 nvmf_trace.0 00:19:49.374 17:22:45 -- common/autotest_common.sh@821 -- # return 0 00:19:49.374 17:22:45 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:49.374 17:22:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:49.374 17:22:45 -- nvmf/common.sh@116 -- # sync 00:19:49.374 17:22:45 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:49.374 17:22:45 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:49.374 17:22:45 -- nvmf/common.sh@119 -- # set +e 00:19:49.374 17:22:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:49.374 17:22:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:49.374 rmmod nvme_rdma 00:19:49.374 rmmod nvme_fabrics 
00:19:49.374 17:22:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:49.374 17:22:45 -- nvmf/common.sh@123 -- # set -e 00:19:49.374 17:22:45 -- nvmf/common.sh@124 -- # return 0 00:19:49.374 17:22:45 -- nvmf/common.sh@477 -- # '[' -n 1375442 ']' 00:19:49.374 17:22:45 -- nvmf/common.sh@478 -- # killprocess 1375442 00:19:49.374 17:22:45 -- common/autotest_common.sh@936 -- # '[' -z 1375442 ']' 00:19:49.374 17:22:45 -- common/autotest_common.sh@940 -- # kill -0 1375442 00:19:49.374 17:22:45 -- common/autotest_common.sh@941 -- # uname 00:19:49.374 17:22:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:49.374 17:22:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1375442 00:19:49.374 17:22:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:49.374 17:22:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:49.374 17:22:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1375442' 00:19:49.374 killing process with pid 1375442 00:19:49.374 17:22:45 -- common/autotest_common.sh@955 -- # kill 1375442 00:19:49.374 17:22:45 -- common/autotest_common.sh@960 -- # wait 1375442 00:19:49.635 17:22:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:49.635 17:22:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:49.635 00:19:49.635 real 0m7.839s 00:19:49.635 user 0m3.175s 00:19:49.635 sys 0m5.329s 00:19:49.635 17:22:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:49.635 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:19:49.635 ************************************ 00:19:49.635 END TEST nvmf_zcopy 00:19:49.635 ************************************ 00:19:49.635 17:22:46 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:49.635 17:22:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:49.635 17:22:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:49.635 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:19:49.635 ************************************ 00:19:49.635 START TEST nvmf_nmic 00:19:49.635 ************************************ 00:19:49.635 17:22:46 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:49.635 * Looking for test storage... 
00:19:49.635 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:49.635 17:22:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:49.635 17:22:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:49.635 17:22:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:49.895 17:22:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:49.895 17:22:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:49.895 17:22:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:49.895 17:22:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:49.895 17:22:46 -- scripts/common.sh@335 -- # IFS=.-: 00:19:49.895 17:22:46 -- scripts/common.sh@335 -- # read -ra ver1 00:19:49.895 17:22:46 -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.895 17:22:46 -- scripts/common.sh@336 -- # read -ra ver2 00:19:49.895 17:22:46 -- scripts/common.sh@337 -- # local 'op=<' 00:19:49.895 17:22:46 -- scripts/common.sh@339 -- # ver1_l=2 00:19:49.895 17:22:46 -- scripts/common.sh@340 -- # ver2_l=1 00:19:49.895 17:22:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:49.895 17:22:46 -- scripts/common.sh@343 -- # case "$op" in 00:19:49.895 17:22:46 -- scripts/common.sh@344 -- # : 1 00:19:49.895 17:22:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:49.895 17:22:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:49.895 17:22:46 -- scripts/common.sh@364 -- # decimal 1 00:19:49.895 17:22:46 -- scripts/common.sh@352 -- # local d=1 00:19:49.895 17:22:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.895 17:22:46 -- scripts/common.sh@354 -- # echo 1 00:19:49.895 17:22:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:49.895 17:22:46 -- scripts/common.sh@365 -- # decimal 2 00:19:49.895 17:22:46 -- scripts/common.sh@352 -- # local d=2 00:19:49.895 17:22:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.895 17:22:46 -- scripts/common.sh@354 -- # echo 2 00:19:49.895 17:22:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:49.895 17:22:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:49.895 17:22:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:49.895 17:22:46 -- scripts/common.sh@367 -- # return 0 00:19:49.895 17:22:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.895 17:22:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.895 --rc genhtml_branch_coverage=1 00:19:49.895 --rc genhtml_function_coverage=1 00:19:49.895 --rc genhtml_legend=1 00:19:49.895 --rc geninfo_all_blocks=1 00:19:49.895 --rc geninfo_unexecuted_blocks=1 00:19:49.895 00:19:49.895 ' 00:19:49.895 17:22:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.895 --rc genhtml_branch_coverage=1 00:19:49.895 --rc genhtml_function_coverage=1 00:19:49.895 --rc genhtml_legend=1 00:19:49.895 --rc geninfo_all_blocks=1 00:19:49.895 --rc geninfo_unexecuted_blocks=1 00:19:49.895 00:19:49.895 ' 00:19:49.895 17:22:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.895 --rc genhtml_branch_coverage=1 00:19:49.895 --rc genhtml_function_coverage=1 00:19:49.895 --rc genhtml_legend=1 00:19:49.895 --rc geninfo_all_blocks=1 00:19:49.895 --rc geninfo_unexecuted_blocks=1 00:19:49.895 00:19:49.895 ' 
00:19:49.895 17:22:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:49.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.895 --rc genhtml_branch_coverage=1 00:19:49.895 --rc genhtml_function_coverage=1 00:19:49.895 --rc genhtml_legend=1 00:19:49.895 --rc geninfo_all_blocks=1 00:19:49.895 --rc geninfo_unexecuted_blocks=1 00:19:49.895 00:19:49.895 ' 00:19:49.895 17:22:46 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.895 17:22:46 -- nvmf/common.sh@7 -- # uname -s 00:19:49.895 17:22:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.895 17:22:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.895 17:22:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.895 17:22:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.896 17:22:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.896 17:22:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.896 17:22:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.896 17:22:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.896 17:22:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.896 17:22:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.896 17:22:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:49.896 17:22:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:49.896 17:22:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.896 17:22:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.896 17:22:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.896 17:22:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:49.896 17:22:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.896 17:22:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.896 17:22:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.896 17:22:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.896 17:22:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.896 17:22:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.896 17:22:46 -- paths/export.sh@5 -- # export PATH 00:19:49.896 17:22:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.896 17:22:46 -- nvmf/common.sh@46 -- # : 0 00:19:49.896 17:22:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:49.896 17:22:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:49.896 17:22:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:49.896 17:22:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.896 17:22:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.896 17:22:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:49.896 17:22:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:49.896 17:22:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:49.896 17:22:46 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.896 17:22:46 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:49.896 17:22:46 -- target/nmic.sh@14 -- # nvmftestinit 00:19:49.896 17:22:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:49.896 17:22:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.896 17:22:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:49.896 17:22:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:49.896 17:22:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:49.896 17:22:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.896 17:22:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.896 17:22:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.896 17:22:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:49.896 17:22:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:49.896 17:22:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:49.896 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:19:56.479 17:22:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:56.479 17:22:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:56.479 17:22:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:56.479 17:22:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:56.479 17:22:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:56.479 17:22:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:56.479 17:22:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:56.479 17:22:52 -- nvmf/common.sh@294 -- # net_devs=() 00:19:56.479 17:22:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:56.479 17:22:52 -- nvmf/common.sh@295 -- # 
e810=() 00:19:56.479 17:22:52 -- nvmf/common.sh@295 -- # local -ga e810 00:19:56.479 17:22:52 -- nvmf/common.sh@296 -- # x722=() 00:19:56.479 17:22:52 -- nvmf/common.sh@296 -- # local -ga x722 00:19:56.479 17:22:52 -- nvmf/common.sh@297 -- # mlx=() 00:19:56.479 17:22:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:56.479 17:22:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.479 17:22:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:56.479 17:22:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:56.479 17:22:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:56.479 17:22:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:56.479 17:22:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:56.479 17:22:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:56.479 17:22:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:56.479 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:56.479 17:22:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:56.479 17:22:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:56.479 17:22:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:56.479 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:56.479 17:22:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:56.479 17:22:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:56.479 17:22:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:56.479 17:22:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.479 17:22:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:19:56.479 17:22:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.479 17:22:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:56.479 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:56.479 17:22:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.479 17:22:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:56.479 17:22:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.479 17:22:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:56.479 17:22:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.479 17:22:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:56.479 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:56.479 17:22:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.479 17:22:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:56.479 17:22:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:56.479 17:22:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:56.479 17:22:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:56.479 17:22:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:56.479 17:22:52 -- nvmf/common.sh@57 -- # uname 00:19:56.479 17:22:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:56.480 17:22:52 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:56.480 17:22:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:56.480 17:22:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:56.480 17:22:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:56.480 17:22:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:56.480 17:22:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:56.480 17:22:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:56.480 17:22:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:56.480 17:22:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:56.480 17:22:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:56.480 17:22:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:56.480 17:22:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:56.480 17:22:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:56.480 17:22:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:56.480 17:22:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:56.480 17:22:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:56.480 17:22:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.480 17:22:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:56.480 17:22:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:56.480 17:22:52 -- nvmf/common.sh@104 -- # continue 2 00:19:56.480 17:22:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:56.480 17:22:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.480 17:22:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:56.480 17:22:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.480 17:22:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:56.480 17:22:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:56.480 17:22:52 -- nvmf/common.sh@104 -- # continue 2 00:19:56.480 17:22:52 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:19:56.480 17:22:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:56.480 17:22:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:56.480 17:22:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:56.480 17:22:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:56.480 17:22:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:56.480 17:22:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:56.480 17:22:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:56.480 17:22:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:56.480 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:56.480 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:56.480 altname enp217s0f0np0 00:19:56.480 altname ens818f0np0 00:19:56.480 inet 192.168.100.8/24 scope global mlx_0_0 00:19:56.480 valid_lft forever preferred_lft forever 00:19:56.480 17:22:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:56.480 17:22:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:56.480 17:22:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:56.480 17:22:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:56.480 17:22:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:56.480 17:22:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:56.480 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:56.480 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:56.480 altname enp217s0f1np1 00:19:56.480 altname ens818f1np1 00:19:56.480 inet 192.168.100.9/24 scope global mlx_0_1 00:19:56.480 valid_lft forever preferred_lft forever 00:19:56.480 17:22:53 -- nvmf/common.sh@410 -- # return 0 00:19:56.480 17:22:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.480 17:22:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:56.480 17:22:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:56.480 17:22:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:56.480 17:22:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:56.480 17:22:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:56.480 17:22:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:56.480 17:22:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:56.480 17:22:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:56.480 17:22:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:56.480 17:22:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:56.480 17:22:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.480 17:22:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:56.480 17:22:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:56.480 17:22:53 -- nvmf/common.sh@104 -- # continue 2 00:19:56.480 17:22:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:56.480 17:22:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.480 17:22:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:56.480 17:22:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.480 17:22:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:56.480 17:22:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:56.480 17:22:53 -- 
nvmf/common.sh@104 -- # continue 2 00:19:56.480 17:22:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:56.480 17:22:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:56.480 17:22:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:56.480 17:22:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:56.480 17:22:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:56.480 17:22:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:56.480 17:22:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:56.480 17:22:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:56.480 192.168.100.9' 00:19:56.480 17:22:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:56.480 192.168.100.9' 00:19:56.480 17:22:53 -- nvmf/common.sh@445 -- # head -n 1 00:19:56.480 17:22:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:56.480 17:22:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:56.480 192.168.100.9' 00:19:56.480 17:22:53 -- nvmf/common.sh@446 -- # tail -n +2 00:19:56.480 17:22:53 -- nvmf/common.sh@446 -- # head -n 1 00:19:56.480 17:22:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:56.480 17:22:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:56.480 17:22:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:56.480 17:22:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:56.480 17:22:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:56.480 17:22:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:56.480 17:22:53 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:56.480 17:22:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:56.480 17:22:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.480 17:22:53 -- common/autotest_common.sh@10 -- # set +x 00:19:56.480 17:22:53 -- nvmf/common.sh@469 -- # nvmfpid=1379056 00:19:56.480 17:22:53 -- nvmf/common.sh@470 -- # waitforlisten 1379056 00:19:56.480 17:22:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.480 17:22:53 -- common/autotest_common.sh@829 -- # '[' -z 1379056 ']' 00:19:56.480 17:22:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.480 17:22:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.480 17:22:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.480 17:22:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.480 17:22:53 -- common/autotest_common.sh@10 -- # set +x 00:19:56.741 [2024-12-14 17:22:53.173230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:56.741 [2024-12-14 17:22:53.173285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.741 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.741 [2024-12-14 17:22:53.244772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.741 [2024-12-14 17:22:53.284550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:56.741 [2024-12-14 17:22:53.284687] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.741 [2024-12-14 17:22:53.284697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.741 [2024-12-14 17:22:53.284706] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.741 [2024-12-14 17:22:53.284753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.741 [2024-12-14 17:22:53.284852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.741 [2024-12-14 17:22:53.284937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.741 [2024-12-14 17:22:53.284938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.682 17:22:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.682 17:22:53 -- common/autotest_common.sh@862 -- # return 0 00:19:57.682 17:22:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:57.682 17:22:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.682 17:22:53 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 17:22:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.682 17:22:54 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 [2024-12-14 17:22:54.070137] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c000d0/0x1c045a0) succeed. 00:19:57.682 [2024-12-14 17:22:54.079352] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c01670/0x1c45c40) succeed. 
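The rpc_cmd traces that follow (bdev_malloc_create through nvmf_subsystem_add_listener) build the nmic target configuration one call at a time over /var/tmp/spdk.sock. Consolidated as direct scripts/rpc.py invocations it looks roughly like the sketch below; the NQNs, serial numbers, and listen address are copied from this run, and the relative rpc.py path assumes the SPDK source tree as the working directory:

  #!/usr/bin/env bash
  RPC="./scripts/rpc.py"   # talks to /var/tmp/spdk.sock by default, as in the log
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Negative case exercised next in the trace: Malloc0 is already claimed by cnode1,
  # so adding it to a second subsystem is expected to fail with "Invalid parameters".
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "add_ns failed as expected"
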
00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 Malloc0 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 [2024-12-14 17:22:54.243433] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:57.682 test case1: single bdev can't be used in multiple subsystems 00:19:57.682 17:22:54 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@28 -- # nmic_status=0 00:19:57.682 17:22:54 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 [2024-12-14 17:22:54.267246] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:57.682 [2024-12-14 17:22:54.267266] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:57.682 [2024-12-14 17:22:54.267276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:57.682 request: 00:19:57.682 { 00:19:57.682 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:57.682 "namespace": { 00:19:57.682 "bdev_name": "Malloc0" 00:19:57.682 }, 00:19:57.682 "method": "nvmf_subsystem_add_ns", 00:19:57.682 "req_id": 1 00:19:57.682 } 00:19:57.682 Got JSON-RPC error response 00:19:57.682 response: 00:19:57.682 { 
00:19:57.682 "code": -32602, 00:19:57.682 "message": "Invalid parameters" 00:19:57.682 } 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@29 -- # nmic_status=1 00:19:57.682 17:22:54 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:57.682 17:22:54 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:57.682 Adding namespace failed - expected result. 00:19:57.682 17:22:54 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:57.682 test case2: host connect to nvmf target in multiple paths 00:19:57.682 17:22:54 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:57.682 17:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.682 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:19:57.682 [2024-12-14 17:22:54.279316] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:57.682 17:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.682 17:22:54 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:58.622 17:22:55 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:59.560 17:22:56 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:59.560 17:22:56 -- common/autotest_common.sh@1187 -- # local i=0 00:19:59.560 17:22:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:59.819 17:22:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:59.819 17:22:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:01.729 17:22:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:01.729 17:22:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:01.729 17:22:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:01.729 17:22:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:01.729 17:22:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:01.729 17:22:58 -- common/autotest_common.sh@1197 -- # return 0 00:20:01.729 17:22:58 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:01.729 [global] 00:20:01.729 thread=1 00:20:01.729 invalidate=1 00:20:01.729 rw=write 00:20:01.729 time_based=1 00:20:01.729 runtime=1 00:20:01.729 ioengine=libaio 00:20:01.729 direct=1 00:20:01.729 bs=4096 00:20:01.729 iodepth=1 00:20:01.729 norandommap=0 00:20:01.729 numjobs=1 00:20:01.729 00:20:01.729 verify_dump=1 00:20:01.729 verify_backlog=512 00:20:01.729 verify_state_save=0 00:20:01.729 do_verify=1 00:20:01.729 verify=crc32c-intel 00:20:01.729 [job0] 00:20:01.729 filename=/dev/nvme0n1 00:20:01.729 Could not set queue depth (nvme0n1) 00:20:01.989 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:01.989 fio-3.35 00:20:01.989 Starting 1 thread 00:20:03.372 00:20:03.372 job0: (groupid=0, jobs=1): err= 0: pid=1380154: Sat Dec 14 17:22:59 2024 00:20:03.372 read: IOPS=7107, 
BW=27.8MiB/s (29.1MB/s)(27.8MiB/1001msec) 00:20:03.372 slat (nsec): min=8261, max=33638, avg=8873.97, stdev=886.45 00:20:03.372 clat (usec): min=35, max=178, avg=58.38, stdev= 3.99 00:20:03.372 lat (usec): min=58, max=187, avg=67.25, stdev= 4.08 00:20:03.372 clat percentiles (usec): 00:20:03.372 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:20:03.372 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:20:03.372 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 65], 00:20:03.372 | 99.00th=[ 68], 99.50th=[ 71], 99.90th=[ 80], 99.95th=[ 88], 00:20:03.372 | 99.99th=[ 180] 00:20:03.372 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:20:03.372 slat (nsec): min=9892, max=43693, avg=11457.56, stdev=1001.88 00:20:03.372 clat (nsec): min=45074, max=80771, avg=56072.03, stdev=3683.90 00:20:03.372 lat (usec): min=57, max=124, avg=67.53, stdev= 3.83 00:20:03.372 clat percentiles (nsec): 00:20:03.372 | 1.00th=[48896], 5.00th=[50432], 10.00th=[51456], 20.00th=[52992], 00:20:03.372 | 30.00th=[54016], 40.00th=[55040], 50.00th=[55552], 60.00th=[56576], 00:20:03.372 | 70.00th=[58112], 80.00th=[59136], 90.00th=[61184], 95.00th=[62208], 00:20:03.372 | 99.00th=[65280], 99.50th=[66048], 99.90th=[73216], 99.95th=[77312], 00:20:03.372 | 99.99th=[80384] 00:20:03.372 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:20:03.372 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:20:03.372 lat (usec) : 50=1.79%, 100=98.20%, 250=0.01% 00:20:03.372 cpu : usr=11.90%, sys=18.10%, ctx=14284, majf=0, minf=1 00:20:03.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.372 issued rwts: total=7115,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.372 00:20:03.372 Run status group 0 (all jobs): 00:20:03.372 READ: bw=27.8MiB/s (29.1MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.1MB/s), io=27.8MiB (29.1MB), run=1001-1001msec 00:20:03.372 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:20:03.372 00:20:03.372 Disk stats (read/write): 00:20:03.372 nvme0n1: ios=6232/6656, merge=0/0, ticks=314/303, in_queue=617, util=90.58% 00:20:03.372 17:22:59 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:05.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:05.283 17:23:01 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:05.283 17:23:01 -- common/autotest_common.sh@1208 -- # local i=0 00:20:05.283 17:23:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:05.283 17:23:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.283 17:23:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:05.283 17:23:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.283 17:23:01 -- common/autotest_common.sh@1220 -- # return 0 00:20:05.283 17:23:01 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:05.283 17:23:01 -- target/nmic.sh@53 -- # nvmftestfini 00:20:05.283 17:23:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:05.283 17:23:01 -- nvmf/common.sh@116 -- # sync 00:20:05.283 17:23:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 
00:20:05.283 17:23:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:05.283 17:23:01 -- nvmf/common.sh@119 -- # set +e 00:20:05.283 17:23:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:05.283 17:23:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:05.283 rmmod nvme_rdma 00:20:05.283 rmmod nvme_fabrics 00:20:05.283 17:23:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:05.283 17:23:01 -- nvmf/common.sh@123 -- # set -e 00:20:05.283 17:23:01 -- nvmf/common.sh@124 -- # return 0 00:20:05.283 17:23:01 -- nvmf/common.sh@477 -- # '[' -n 1379056 ']' 00:20:05.283 17:23:01 -- nvmf/common.sh@478 -- # killprocess 1379056 00:20:05.283 17:23:01 -- common/autotest_common.sh@936 -- # '[' -z 1379056 ']' 00:20:05.283 17:23:01 -- common/autotest_common.sh@940 -- # kill -0 1379056 00:20:05.283 17:23:01 -- common/autotest_common.sh@941 -- # uname 00:20:05.283 17:23:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.283 17:23:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1379056 00:20:05.283 17:23:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:05.283 17:23:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:05.283 17:23:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1379056' 00:20:05.283 killing process with pid 1379056 00:20:05.283 17:23:01 -- common/autotest_common.sh@955 -- # kill 1379056 00:20:05.283 17:23:01 -- common/autotest_common.sh@960 -- # wait 1379056 00:20:05.544 17:23:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:05.544 17:23:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:05.544 00:20:05.544 real 0m15.976s 00:20:05.544 user 0m46.034s 00:20:05.544 sys 0m6.122s 00:20:05.544 17:23:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:05.544 17:23:02 -- common/autotest_common.sh@10 -- # set +x 00:20:05.544 ************************************ 00:20:05.544 END TEST nvmf_nmic 00:20:05.544 ************************************ 00:20:05.544 17:23:02 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:05.544 17:23:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:05.544 17:23:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:05.544 17:23:02 -- common/autotest_common.sh@10 -- # set +x 00:20:05.544 ************************************ 00:20:05.544 START TEST nvmf_fio_target 00:20:05.544 ************************************ 00:20:05.544 17:23:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:05.805 * Looking for test storage... 
00:20:05.805 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:05.805 17:23:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:05.805 17:23:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:05.805 17:23:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:05.805 17:23:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:05.805 17:23:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:05.805 17:23:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:05.805 17:23:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:05.805 17:23:02 -- scripts/common.sh@335 -- # IFS=.-: 00:20:05.805 17:23:02 -- scripts/common.sh@335 -- # read -ra ver1 00:20:05.805 17:23:02 -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.805 17:23:02 -- scripts/common.sh@336 -- # read -ra ver2 00:20:05.805 17:23:02 -- scripts/common.sh@337 -- # local 'op=<' 00:20:05.805 17:23:02 -- scripts/common.sh@339 -- # ver1_l=2 00:20:05.805 17:23:02 -- scripts/common.sh@340 -- # ver2_l=1 00:20:05.805 17:23:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:05.805 17:23:02 -- scripts/common.sh@343 -- # case "$op" in 00:20:05.805 17:23:02 -- scripts/common.sh@344 -- # : 1 00:20:05.805 17:23:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:05.805 17:23:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.805 17:23:02 -- scripts/common.sh@364 -- # decimal 1 00:20:05.805 17:23:02 -- scripts/common.sh@352 -- # local d=1 00:20:05.805 17:23:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.805 17:23:02 -- scripts/common.sh@354 -- # echo 1 00:20:05.805 17:23:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:05.805 17:23:02 -- scripts/common.sh@365 -- # decimal 2 00:20:05.805 17:23:02 -- scripts/common.sh@352 -- # local d=2 00:20:05.805 17:23:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.805 17:23:02 -- scripts/common.sh@354 -- # echo 2 00:20:05.805 17:23:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:05.805 17:23:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:05.805 17:23:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:05.805 17:23:02 -- scripts/common.sh@367 -- # return 0 00:20:05.805 17:23:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.805 17:23:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:05.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.805 --rc genhtml_branch_coverage=1 00:20:05.805 --rc genhtml_function_coverage=1 00:20:05.805 --rc genhtml_legend=1 00:20:05.805 --rc geninfo_all_blocks=1 00:20:05.805 --rc geninfo_unexecuted_blocks=1 00:20:05.805 00:20:05.805 ' 00:20:05.805 17:23:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:05.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.805 --rc genhtml_branch_coverage=1 00:20:05.805 --rc genhtml_function_coverage=1 00:20:05.805 --rc genhtml_legend=1 00:20:05.805 --rc geninfo_all_blocks=1 00:20:05.805 --rc geninfo_unexecuted_blocks=1 00:20:05.805 00:20:05.805 ' 00:20:05.805 17:23:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:05.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.805 --rc genhtml_branch_coverage=1 00:20:05.805 --rc genhtml_function_coverage=1 00:20:05.805 --rc genhtml_legend=1 00:20:05.805 --rc geninfo_all_blocks=1 00:20:05.805 --rc geninfo_unexecuted_blocks=1 00:20:05.805 00:20:05.805 ' 
00:20:05.805 17:23:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:05.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.805 --rc genhtml_branch_coverage=1 00:20:05.805 --rc genhtml_function_coverage=1 00:20:05.805 --rc genhtml_legend=1 00:20:05.805 --rc geninfo_all_blocks=1 00:20:05.805 --rc geninfo_unexecuted_blocks=1 00:20:05.805 00:20:05.805 ' 00:20:05.805 17:23:02 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.805 17:23:02 -- nvmf/common.sh@7 -- # uname -s 00:20:05.805 17:23:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.805 17:23:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.805 17:23:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.805 17:23:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.805 17:23:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.805 17:23:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.805 17:23:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.805 17:23:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.805 17:23:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.805 17:23:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.805 17:23:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:05.805 17:23:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:05.805 17:23:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.805 17:23:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.805 17:23:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.805 17:23:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:05.805 17:23:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.805 17:23:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.805 17:23:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.805 17:23:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.805 17:23:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.805 17:23:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.805 17:23:02 -- paths/export.sh@5 -- # export PATH 00:20:05.805 17:23:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.805 17:23:02 -- nvmf/common.sh@46 -- # : 0 00:20:05.805 17:23:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:05.805 17:23:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:05.805 17:23:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:05.805 17:23:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.805 17:23:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.805 17:23:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:05.805 17:23:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:05.805 17:23:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:05.805 17:23:02 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:05.806 17:23:02 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:05.806 17:23:02 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:05.806 17:23:02 -- target/fio.sh@16 -- # nvmftestinit 00:20:05.806 17:23:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:05.806 17:23:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.806 17:23:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:05.806 17:23:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:05.806 17:23:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:05.806 17:23:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.806 17:23:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.806 17:23:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.806 17:23:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:05.806 17:23:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:05.806 17:23:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:05.806 17:23:02 -- common/autotest_common.sh@10 -- # set +x 00:20:12.446 17:23:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:12.446 17:23:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:12.446 17:23:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:12.446 17:23:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:12.446 17:23:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:12.446 17:23:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:12.446 17:23:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:12.446 17:23:08 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:12.446 17:23:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:12.446 17:23:08 -- nvmf/common.sh@295 -- # e810=() 00:20:12.446 17:23:08 -- nvmf/common.sh@295 -- # local -ga e810 00:20:12.446 17:23:08 -- nvmf/common.sh@296 -- # x722=() 00:20:12.446 17:23:08 -- nvmf/common.sh@296 -- # local -ga x722 00:20:12.446 17:23:08 -- nvmf/common.sh@297 -- # mlx=() 00:20:12.446 17:23:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:12.446 17:23:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.446 17:23:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:12.446 17:23:08 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:12.446 17:23:08 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:12.446 17:23:08 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:12.446 17:23:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:12.446 17:23:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:12.446 17:23:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:12.446 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:12.446 17:23:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.446 17:23:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:12.446 17:23:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:12.446 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:12.446 17:23:08 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.446 17:23:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:12.446 17:23:08 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:12.446 17:23:08 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.446 17:23:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:12.446 17:23:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.446 17:23:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:12.446 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:12.446 17:23:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.446 17:23:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:12.446 17:23:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.446 17:23:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:12.446 17:23:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.446 17:23:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:12.446 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:12.446 17:23:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.446 17:23:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:12.446 17:23:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:12.446 17:23:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:12.446 17:23:08 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:12.446 17:23:08 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:12.446 17:23:08 -- nvmf/common.sh@57 -- # uname 00:20:12.446 17:23:08 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:12.446 17:23:08 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:12.446 17:23:08 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:12.446 17:23:08 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:12.446 17:23:08 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:12.447 17:23:08 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:12.447 17:23:08 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:12.447 17:23:08 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:12.447 17:23:08 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:12.447 17:23:08 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:12.447 17:23:08 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:12.447 17:23:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.447 17:23:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:12.447 17:23:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:12.447 17:23:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.447 17:23:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:12.447 17:23:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@104 -- # continue 2 00:20:12.447 17:23:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:12.447 17:23:08 -- 
nvmf/common.sh@104 -- # continue 2 00:20:12.447 17:23:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:12.447 17:23:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:12.447 17:23:08 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:12.447 17:23:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:12.447 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.447 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:12.447 altname enp217s0f0np0 00:20:12.447 altname ens818f0np0 00:20:12.447 inet 192.168.100.8/24 scope global mlx_0_0 00:20:12.447 valid_lft forever preferred_lft forever 00:20:12.447 17:23:08 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:12.447 17:23:08 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:12.447 17:23:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:12.447 17:23:08 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:12.447 17:23:08 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:12.447 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.447 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:12.447 altname enp217s0f1np1 00:20:12.447 altname ens818f1np1 00:20:12.447 inet 192.168.100.9/24 scope global mlx_0_1 00:20:12.447 valid_lft forever preferred_lft forever 00:20:12.447 17:23:08 -- nvmf/common.sh@410 -- # return 0 00:20:12.447 17:23:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:12.447 17:23:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:12.447 17:23:08 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:12.447 17:23:08 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:12.447 17:23:08 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.447 17:23:08 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:12.447 17:23:08 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:12.447 17:23:08 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.447 17:23:08 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:12.447 17:23:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@104 -- # continue 2 00:20:12.447 17:23:08 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.447 17:23:08 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.447 17:23:08 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:20:12.447 17:23:08 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:12.447 17:23:08 -- nvmf/common.sh@104 -- # continue 2 00:20:12.447 17:23:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:12.447 17:23:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:12.447 17:23:08 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:12.447 17:23:08 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:12.447 17:23:08 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:12.447 17:23:08 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:12.447 17:23:08 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:12.447 192.168.100.9' 00:20:12.447 17:23:08 -- nvmf/common.sh@445 -- # head -n 1 00:20:12.447 17:23:08 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:12.447 192.168.100.9' 00:20:12.447 17:23:08 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:12.447 17:23:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:12.447 192.168.100.9' 00:20:12.447 17:23:08 -- nvmf/common.sh@446 -- # tail -n +2 00:20:12.447 17:23:08 -- nvmf/common.sh@446 -- # head -n 1 00:20:12.447 17:23:08 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:12.447 17:23:08 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:12.447 17:23:08 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:12.447 17:23:08 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:12.447 17:23:08 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:12.447 17:23:08 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:12.447 17:23:08 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:12.447 17:23:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:12.447 17:23:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.447 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.447 17:23:08 -- nvmf/common.sh@469 -- # nvmfpid=1383934 00:20:12.447 17:23:08 -- nvmf/common.sh@470 -- # waitforlisten 1383934 00:20:12.447 17:23:08 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:12.447 17:23:08 -- common/autotest_common.sh@829 -- # '[' -z 1383934 ']' 00:20:12.447 17:23:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.447 17:23:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.447 17:23:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.447 17:23:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.447 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.447 [2024-12-14 17:23:08.983227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:12.447 [2024-12-14 17:23:08.983287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.447 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.447 [2024-12-14 17:23:09.055893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.447 [2024-12-14 17:23:09.094598] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:12.447 [2024-12-14 17:23:09.094727] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.447 [2024-12-14 17:23:09.094737] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.447 [2024-12-14 17:23:09.094746] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.447 [2024-12-14 17:23:09.094794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.447 [2024-12-14 17:23:09.094891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.447 [2024-12-14 17:23:09.094953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.447 [2024-12-14 17:23:09.094955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.385 17:23:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.385 17:23:09 -- common/autotest_common.sh@862 -- # return 0 00:20:13.385 17:23:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:13.385 17:23:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.385 17:23:09 -- common/autotest_common.sh@10 -- # set +x 00:20:13.385 17:23:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.385 17:23:09 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:13.385 [2024-12-14 17:23:10.033981] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x232c0d0/0x23305a0) succeed. 00:20:13.385 [2024-12-14 17:23:10.043196] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x232d670/0x2371c40) succeed. 
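With the transport up, the RPC calls that follow assemble the bdev layout this fio run targets: two plain malloc bdevs, a RAID-0 volume over two more, and a concat volume over three more, all exported as namespaces of one subsystem. Condensed into a standalone sketch under the same assumptions as above (scripts/rpc.py from the SPDK repo root against the default RPC socket; bdev_malloc_create prints the auto-assigned names, Malloc0 through Malloc6 in this run):

  # plain namespaces
  scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc1

  # RAID-0 built from two further malloc bdevs (-z 64: strip size, as passed in the run below)
  scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc2
  scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc3
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'

  # concat volume from three more malloc bdevs
  scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc4
  scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc5
  scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc6
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # one subsystem carrying all four namespaces, listening on NVMe/RDMA port 4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

After the nvme connect further down, those four namespaces appear on the host as /dev/nvme0n1 through /dev/nvme0n4, which is why the fio job file in this test lists four filenames.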
00:20:13.645 17:23:10 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:13.904 17:23:10 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:13.904 17:23:10 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.163 17:23:10 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:14.163 17:23:10 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.163 17:23:10 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:14.163 17:23:10 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.423 17:23:11 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:14.423 17:23:11 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:14.680 17:23:11 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.939 17:23:11 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:14.939 17:23:11 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.939 17:23:11 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:14.939 17:23:11 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.199 17:23:11 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:15.199 17:23:11 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:15.458 17:23:11 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:15.717 17:23:12 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:15.717 17:23:12 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.717 17:23:12 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:15.717 17:23:12 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:15.976 17:23:12 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:16.235 [2024-12-14 17:23:12.722847] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:16.236 17:23:12 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:16.495 17:23:12 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:16.495 17:23:13 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:17.432 17:23:14 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:17.432 17:23:14 -- common/autotest_common.sh@1187 -- # local 
i=0 00:20:17.432 17:23:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:17.432 17:23:14 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:20:17.432 17:23:14 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:20:17.432 17:23:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:19.969 17:23:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:19.969 17:23:16 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:19.969 17:23:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:19.969 17:23:16 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:20:19.969 17:23:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:19.969 17:23:16 -- common/autotest_common.sh@1197 -- # return 0 00:20:19.969 17:23:16 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:19.969 [global] 00:20:19.969 thread=1 00:20:19.969 invalidate=1 00:20:19.969 rw=write 00:20:19.969 time_based=1 00:20:19.969 runtime=1 00:20:19.969 ioengine=libaio 00:20:19.969 direct=1 00:20:19.969 bs=4096 00:20:19.969 iodepth=1 00:20:19.969 norandommap=0 00:20:19.969 numjobs=1 00:20:19.969 00:20:19.969 verify_dump=1 00:20:19.969 verify_backlog=512 00:20:19.969 verify_state_save=0 00:20:19.969 do_verify=1 00:20:19.969 verify=crc32c-intel 00:20:19.969 [job0] 00:20:19.969 filename=/dev/nvme0n1 00:20:19.969 [job1] 00:20:19.969 filename=/dev/nvme0n2 00:20:19.969 [job2] 00:20:19.969 filename=/dev/nvme0n3 00:20:19.969 [job3] 00:20:19.969 filename=/dev/nvme0n4 00:20:19.969 Could not set queue depth (nvme0n1) 00:20:19.969 Could not set queue depth (nvme0n2) 00:20:19.969 Could not set queue depth (nvme0n3) 00:20:19.969 Could not set queue depth (nvme0n4) 00:20:19.969 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:19.969 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:19.969 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:19.969 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:19.969 fio-3.35 00:20:19.969 Starting 4 threads 00:20:21.349 00:20:21.349 job0: (groupid=0, jobs=1): err= 0: pid=1385460: Sat Dec 14 17:23:17 2024 00:20:21.349 read: IOPS=4820, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec) 00:20:21.349 slat (nsec): min=8357, max=33161, avg=9006.35, stdev=984.62 00:20:21.349 clat (usec): min=59, max=164, avg=90.26, stdev=20.43 00:20:21.349 lat (usec): min=73, max=173, avg=99.27, stdev=20.50 00:20:21.349 clat percentiles (usec): 00:20:21.349 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:20:21.349 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 84], 00:20:21.349 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 122], 95.00th=[ 126], 00:20:21.350 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 149], 99.95th=[ 153], 00:20:21.350 | 99.99th=[ 165] 00:20:21.350 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:20:21.350 slat (nsec): min=10869, max=41048, avg=11717.97, stdev=1436.69 00:20:21.350 clat (usec): min=58, max=155, avg=84.95, stdev=19.01 00:20:21.350 lat (usec): min=70, max=169, avg=96.67, stdev=19.03 00:20:21.350 clat percentiles (usec): 00:20:21.350 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:20:21.350 | 30.00th=[ 
73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 79], 00:20:21.350 | 70.00th=[ 94], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 121], 00:20:21.350 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 139], 99.95th=[ 151], 00:20:21.350 | 99.99th=[ 157] 00:20:21.350 bw ( KiB/s): min=24576, max=24576, per=38.75%, avg=24576.00, stdev= 0.00, samples=1 00:20:21.350 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:20:21.350 lat (usec) : 100=68.66%, 250=31.34% 00:20:21.350 cpu : usr=7.30%, sys=13.40%, ctx=9945, majf=0, minf=1 00:20:21.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 issued rwts: total=4825,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.350 job1: (groupid=0, jobs=1): err= 0: pid=1385461: Sat Dec 14 17:23:17 2024 00:20:21.350 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1001msec) 00:20:21.350 slat (nsec): min=8433, max=29372, avg=9040.43, stdev=889.71 00:20:21.350 clat (usec): min=65, max=212, avg=131.18, stdev=22.74 00:20:21.350 lat (usec): min=74, max=221, avg=140.22, stdev=22.85 00:20:21.350 clat percentiles (usec): 00:20:21.350 | 1.00th=[ 75], 5.00th=[ 103], 10.00th=[ 111], 20.00th=[ 115], 00:20:21.350 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 137], 00:20:21.350 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 172], 00:20:21.350 | 99.00th=[ 198], 99.50th=[ 200], 99.90th=[ 206], 99.95th=[ 208], 00:20:21.350 | 99.99th=[ 212] 00:20:21.350 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:20:21.350 slat (nsec): min=10505, max=41008, avg=11556.83, stdev=1240.41 00:20:21.350 clat (usec): min=63, max=203, avg=123.71, stdev=21.95 00:20:21.350 lat (usec): min=74, max=215, avg=135.26, stdev=22.09 00:20:21.350 clat percentiles (usec): 00:20:21.350 | 1.00th=[ 71], 5.00th=[ 92], 10.00th=[ 102], 20.00th=[ 108], 00:20:21.350 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 122], 60.00th=[ 131], 00:20:21.350 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 161], 00:20:21.350 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 194], 99.95th=[ 194], 00:20:21.350 | 99.99th=[ 204] 00:20:21.350 bw ( KiB/s): min=13960, max=13960, per=22.01%, avg=13960.00, stdev= 0.00, samples=1 00:20:21.350 iops : min= 3490, max= 3490, avg=3490.00, stdev= 0.00, samples=1 00:20:21.350 lat (usec) : 100=6.21%, 250=93.79% 00:20:21.350 cpu : usr=6.40%, sys=8.70%, ctx=7133, majf=0, minf=1 00:20:21.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 issued rwts: total=3549,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.350 job2: (groupid=0, jobs=1): err= 0: pid=1385462: Sat Dec 14 17:23:17 2024 00:20:21.350 read: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1001msec) 00:20:21.350 slat (nsec): min=8629, max=22806, avg=9207.07, stdev=908.28 00:20:21.350 clat (usec): min=70, max=207, avg=130.99, stdev=22.08 00:20:21.350 lat (usec): min=79, max=216, avg=140.20, stdev=22.15 00:20:21.350 clat percentiles (usec): 00:20:21.350 | 1.00th=[ 80], 5.00th=[ 104], 10.00th=[ 111], 20.00th=[ 115], 00:20:21.350 | 30.00th=[ 
118], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 137], 00:20:21.350 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 174], 00:20:21.350 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 202], 99.95th=[ 206], 00:20:21.350 | 99.99th=[ 208] 00:20:21.350 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:20:21.350 slat (nsec): min=10577, max=37250, avg=11723.00, stdev=1132.91 00:20:21.350 clat (usec): min=66, max=198, avg=123.52, stdev=21.32 00:20:21.350 lat (usec): min=78, max=210, avg=135.24, stdev=21.38 00:20:21.350 clat percentiles (usec): 00:20:21.350 | 1.00th=[ 73], 5.00th=[ 94], 10.00th=[ 102], 20.00th=[ 108], 00:20:21.350 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 123], 60.00th=[ 131], 00:20:21.350 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 161], 00:20:21.350 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 194], 99.95th=[ 198], 00:20:21.350 | 99.99th=[ 200] 00:20:21.350 bw ( KiB/s): min=13944, max=13944, per=21.99%, avg=13944.00, stdev= 0.00, samples=1 00:20:21.350 iops : min= 3486, max= 3486, avg=3486.00, stdev= 0.00, samples=1 00:20:21.350 lat (usec) : 100=5.79%, 250=94.21% 00:20:21.350 cpu : usr=6.20%, sys=9.10%, ctx=7132, majf=0, minf=1 00:20:21.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 issued rwts: total=3548,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.350 job3: (groupid=0, jobs=1): err= 0: pid=1385463: Sat Dec 14 17:23:17 2024 00:20:21.350 read: IOPS=3501, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec) 00:20:21.350 slat (nsec): min=8692, max=40663, avg=11111.73, stdev=3753.25 00:20:21.350 clat (usec): min=69, max=212, avg=129.61, stdev=23.08 00:20:21.350 lat (usec): min=78, max=221, avg=140.72, stdev=21.71 00:20:21.350 clat percentiles (usec): 00:20:21.350 | 1.00th=[ 83], 5.00th=[ 99], 10.00th=[ 104], 20.00th=[ 111], 00:20:21.350 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 126], 60.00th=[ 137], 00:20:21.350 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 172], 00:20:21.350 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 206], 99.95th=[ 210], 00:20:21.350 | 99.99th=[ 212] 00:20:21.350 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:20:21.350 slat (nsec): min=10683, max=40614, avg=13350.05, stdev=3706.20 00:20:21.350 clat (usec): min=68, max=192, avg=122.94, stdev=21.60 00:20:21.350 lat (usec): min=79, max=204, avg=136.29, stdev=20.39 00:20:21.350 clat percentiles (usec): 00:20:21.350 | 1.00th=[ 75], 5.00th=[ 92], 10.00th=[ 99], 20.00th=[ 105], 00:20:21.350 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 123], 60.00th=[ 131], 00:20:21.350 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 161], 00:20:21.350 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 190], 99.95th=[ 192], 00:20:21.350 | 99.99th=[ 194] 00:20:21.350 bw ( KiB/s): min=13984, max=13984, per=22.05%, avg=13984.00, stdev= 0.00, samples=1 00:20:21.350 iops : min= 3496, max= 3496, avg=3496.00, stdev= 0.00, samples=1 00:20:21.350 lat (usec) : 100=8.96%, 250=91.04% 00:20:21.350 cpu : usr=4.70%, sys=10.10%, ctx=7090, majf=0, minf=1 00:20:21.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.350 issued rwts: total=3505,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.350 00:20:21.350 Run status group 0 (all jobs): 00:20:21.350 READ: bw=60.2MiB/s (63.1MB/s), 13.7MiB/s-18.8MiB/s (14.3MB/s-19.7MB/s), io=60.3MiB (63.2MB), run=1001-1001msec 00:20:21.350 WRITE: bw=61.9MiB/s (64.9MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=62.0MiB (65.0MB), run=1001-1001msec 00:20:21.350 00:20:21.350 Disk stats (read/write): 00:20:21.350 nvme0n1: ios=4145/4571, merge=0/0, ticks=331/334, in_queue=665, util=84.77% 00:20:21.350 nvme0n2: ios=2764/3072, merge=0/0, ticks=342/358, in_queue=700, util=85.60% 00:20:21.350 nvme0n3: ios=2763/3072, merge=0/0, ticks=346/360, in_queue=706, util=88.58% 00:20:21.350 nvme0n4: ios=2742/3072, merge=0/0, ticks=339/354, in_queue=693, util=89.63% 00:20:21.350 17:23:17 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:21.350 [global] 00:20:21.350 thread=1 00:20:21.350 invalidate=1 00:20:21.350 rw=randwrite 00:20:21.350 time_based=1 00:20:21.350 runtime=1 00:20:21.350 ioengine=libaio 00:20:21.350 direct=1 00:20:21.350 bs=4096 00:20:21.350 iodepth=1 00:20:21.350 norandommap=0 00:20:21.350 numjobs=1 00:20:21.350 00:20:21.350 verify_dump=1 00:20:21.350 verify_backlog=512 00:20:21.350 verify_state_save=0 00:20:21.350 do_verify=1 00:20:21.350 verify=crc32c-intel 00:20:21.350 [job0] 00:20:21.350 filename=/dev/nvme0n1 00:20:21.350 [job1] 00:20:21.350 filename=/dev/nvme0n2 00:20:21.350 [job2] 00:20:21.350 filename=/dev/nvme0n3 00:20:21.350 [job3] 00:20:21.350 filename=/dev/nvme0n4 00:20:21.350 Could not set queue depth (nvme0n1) 00:20:21.350 Could not set queue depth (nvme0n2) 00:20:21.350 Could not set queue depth (nvme0n3) 00:20:21.350 Could not set queue depth (nvme0n4) 00:20:21.610 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.610 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.610 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.610 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.610 fio-3.35 00:20:21.610 Starting 4 threads 00:20:22.990 00:20:22.990 job0: (groupid=0, jobs=1): err= 0: pid=1385885: Sat Dec 14 17:23:19 2024 00:20:22.990 read: IOPS=4078, BW=15.9MiB/s (16.7MB/s)(15.9MiB/1000msec) 00:20:22.990 slat (nsec): min=8381, max=29392, avg=8953.71, stdev=891.95 00:20:22.990 clat (usec): min=69, max=357, avg=114.47, stdev=15.02 00:20:22.990 lat (usec): min=78, max=366, avg=123.43, stdev=15.06 00:20:22.990 clat percentiles (usec): 00:20:22.990 | 1.00th=[ 86], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 104], 00:20:22.990 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:20:22.990 | 70.00th=[ 120], 80.00th=[ 125], 90.00th=[ 133], 95.00th=[ 139], 00:20:22.990 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 194], 99.95th=[ 196], 00:20:22.990 | 99.99th=[ 359] 00:20:22.990 write: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec); 0 zone resets 00:20:22.990 slat (nsec): min=10289, max=67008, avg=11236.96, stdev=1413.68 00:20:22.990 clat (usec): min=62, max=200, avg=104.68, stdev=12.84 00:20:22.990 lat (usec): min=76, max=211, avg=115.91, stdev=12.89 00:20:22.990 clat percentiles (usec): 
00:20:22.990 | 1.00th=[ 78], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 95], 00:20:22.990 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 104], 60.00th=[ 106], 00:20:22.990 | 70.00th=[ 110], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 126], 00:20:22.990 | 99.00th=[ 147], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 180], 00:20:22.990 | 99.99th=[ 200] 00:20:22.990 bw ( KiB/s): min=17312, max=17312, per=23.50%, avg=17312.00, stdev= 0.00, samples=1 00:20:22.990 iops : min= 4328, max= 4328, avg=4328.00, stdev= 0.00, samples=1 00:20:22.990 lat (usec) : 100=24.33%, 250=75.65%, 500=0.01% 00:20:22.990 cpu : usr=7.40%, sys=10.10%, ctx=8175, majf=0, minf=1 00:20:22.990 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.990 issued rwts: total=4078,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.990 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.990 job1: (groupid=0, jobs=1): err= 0: pid=1385886: Sat Dec 14 17:23:19 2024 00:20:22.990 read: IOPS=4267, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1001msec) 00:20:22.990 slat (nsec): min=8456, max=28225, avg=9114.77, stdev=809.04 00:20:22.990 clat (usec): min=64, max=174, avg=103.04, stdev=16.46 00:20:22.990 lat (usec): min=73, max=183, avg=112.16, stdev=16.50 00:20:22.990 clat percentiles (usec): 00:20:22.990 | 1.00th=[ 71], 5.00th=[ 75], 10.00th=[ 78], 20.00th=[ 84], 00:20:22.990 | 30.00th=[ 97], 40.00th=[ 103], 50.00th=[ 108], 60.00th=[ 111], 00:20:22.990 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 125], 00:20:22.990 | 99.00th=[ 135], 99.50th=[ 147], 99.90th=[ 159], 99.95th=[ 161], 00:20:22.990 | 99.99th=[ 176] 00:20:22.990 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:20:22.990 slat (nsec): min=10494, max=36413, avg=11612.86, stdev=1302.77 00:20:22.991 clat (usec): min=62, max=199, avg=96.19, stdev=14.51 00:20:22.991 lat (usec): min=73, max=210, avg=107.80, stdev=14.59 00:20:22.991 clat percentiles (usec): 00:20:22.991 | 1.00th=[ 68], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 82], 00:20:22.991 | 30.00th=[ 91], 40.00th=[ 95], 50.00th=[ 99], 60.00th=[ 102], 00:20:22.991 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 117], 00:20:22.991 | 99.00th=[ 128], 99.50th=[ 135], 99.90th=[ 147], 99.95th=[ 149], 00:20:22.991 | 99.99th=[ 200] 00:20:22.991 bw ( KiB/s): min=17208, max=17208, per=23.36%, avg=17208.00, stdev= 0.00, samples=1 00:20:22.991 iops : min= 4302, max= 4302, avg=4302.00, stdev= 0.00, samples=1 00:20:22.991 lat (usec) : 100=44.91%, 250=55.09% 00:20:22.991 cpu : usr=7.10%, sys=9.40%, ctx=8880, majf=0, minf=1 00:20:22.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.991 issued rwts: total=4272,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.991 job2: (groupid=0, jobs=1): err= 0: pid=1385887: Sat Dec 14 17:23:19 2024 00:20:22.991 read: IOPS=4573, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1001msec) 00:20:22.991 slat (nsec): min=8488, max=33899, avg=9086.52, stdev=1057.93 00:20:22.991 clat (usec): min=71, max=300, avg=99.23, stdev=16.90 00:20:22.991 lat (usec): min=84, max=309, avg=108.32, stdev=16.92 00:20:22.991 clat percentiles 
(usec): 00:20:22.991 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:20:22.991 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:20:22.991 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 129], 95.00th=[ 137], 00:20:22.991 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 182], 00:20:22.991 | 99.99th=[ 302] 00:20:22.991 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:20:22.991 slat (nsec): min=10310, max=40579, avg=11361.56, stdev=1062.80 00:20:22.991 clat (usec): min=68, max=179, avg=92.76, stdev=13.04 00:20:22.991 lat (usec): min=79, max=190, avg=104.12, stdev=13.12 00:20:22.991 clat percentiles (usec): 00:20:22.991 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:20:22.991 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:20:22.991 | 70.00th=[ 94], 80.00th=[ 98], 90.00th=[ 113], 95.00th=[ 123], 00:20:22.991 | 99.00th=[ 139], 99.50th=[ 151], 99.90th=[ 161], 99.95th=[ 169], 00:20:22.991 | 99.99th=[ 180] 00:20:22.991 bw ( KiB/s): min=20480, max=20480, per=27.81%, avg=20480.00, stdev= 0.00, samples=1 00:20:22.991 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:22.991 lat (usec) : 100=77.82%, 250=22.15%, 500=0.02% 00:20:22.991 cpu : usr=8.00%, sys=11.60%, ctx=9186, majf=0, minf=1 00:20:22.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.991 issued rwts: total=4578,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.991 job3: (groupid=0, jobs=1): err= 0: pid=1385888: Sat Dec 14 17:23:19 2024 00:20:22.991 read: IOPS=5019, BW=19.6MiB/s (20.6MB/s)(19.6MiB/1001msec) 00:20:22.991 slat (nsec): min=8572, max=20654, avg=9151.23, stdev=873.95 00:20:22.991 clat (usec): min=71, max=133, avg=87.95, stdev= 6.65 00:20:22.991 lat (usec): min=79, max=142, avg=97.11, stdev= 6.72 00:20:22.991 clat percentiles (usec): 00:20:22.991 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 83], 00:20:22.991 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 89], 00:20:22.991 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 97], 95.00th=[ 100], 00:20:22.991 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 128], 00:20:22.991 | 99.99th=[ 135] 00:20:22.991 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:20:22.991 slat (nsec): min=10430, max=38803, avg=11404.50, stdev=1084.76 00:20:22.991 clat (usec): min=60, max=123, avg=83.46, stdev= 6.11 00:20:22.991 lat (usec): min=77, max=153, avg=94.86, stdev= 6.23 00:20:22.991 clat percentiles (usec): 00:20:22.991 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:20:22.991 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:20:22.991 | 70.00th=[ 86], 80.00th=[ 88], 90.00th=[ 92], 95.00th=[ 95], 00:20:22.991 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 115], 99.95th=[ 121], 00:20:22.991 | 99.99th=[ 124] 00:20:22.991 bw ( KiB/s): min=20480, max=20480, per=27.81%, avg=20480.00, stdev= 0.00, samples=1 00:20:22.991 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:22.991 lat (usec) : 100=96.72%, 250=3.28% 00:20:22.991 cpu : usr=7.50%, sys=14.00%, ctx=10145, majf=0, minf=1 00:20:22.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.991 issued rwts: total=5025,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.991 00:20:22.991 Run status group 0 (all jobs): 00:20:22.991 READ: bw=70.1MiB/s (73.5MB/s), 15.9MiB/s-19.6MiB/s (16.7MB/s-20.6MB/s), io=70.1MiB (73.5MB), run=1000-1001msec 00:20:22.991 WRITE: bw=71.9MiB/s (75.4MB/s), 16.0MiB/s-20.0MiB/s (16.8MB/s-20.9MB/s), io=72.0MiB (75.5MB), run=1000-1001msec 00:20:22.991 00:20:22.991 Disk stats (read/write): 00:20:22.991 nvme0n1: ios=3470/3584, merge=0/0, ticks=366/336, in_queue=702, util=84.47% 00:20:22.991 nvme0n2: ios=3478/3584, merge=0/0, ticks=363/339, in_queue=702, util=85.51% 00:20:22.991 nvme0n3: ios=3920/4096, merge=0/0, ticks=318/321, in_queue=639, util=88.49% 00:20:22.991 nvme0n4: ios=4096/4387, merge=0/0, ticks=325/325, in_queue=650, util=89.53% 00:20:22.991 17:23:19 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:22.991 [global] 00:20:22.991 thread=1 00:20:22.991 invalidate=1 00:20:22.991 rw=write 00:20:22.991 time_based=1 00:20:22.991 runtime=1 00:20:22.991 ioengine=libaio 00:20:22.991 direct=1 00:20:22.991 bs=4096 00:20:22.991 iodepth=128 00:20:22.991 norandommap=0 00:20:22.991 numjobs=1 00:20:22.991 00:20:22.991 verify_dump=1 00:20:22.991 verify_backlog=512 00:20:22.991 verify_state_save=0 00:20:22.991 do_verify=1 00:20:22.991 verify=crc32c-intel 00:20:22.991 [job0] 00:20:22.991 filename=/dev/nvme0n1 00:20:22.991 [job1] 00:20:22.991 filename=/dev/nvme0n2 00:20:22.991 [job2] 00:20:22.991 filename=/dev/nvme0n3 00:20:22.991 [job3] 00:20:22.991 filename=/dev/nvme0n4 00:20:22.991 Could not set queue depth (nvme0n1) 00:20:22.991 Could not set queue depth (nvme0n2) 00:20:22.991 Could not set queue depth (nvme0n3) 00:20:22.991 Could not set queue depth (nvme0n4) 00:20:23.258 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.259 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.259 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.259 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:23.259 fio-3.35 00:20:23.259 Starting 4 threads 00:20:24.660 00:20:24.660 job0: (groupid=0, jobs=1): err= 0: pid=1386319: Sat Dec 14 17:23:20 2024 00:20:24.660 read: IOPS=6259, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec) 00:20:24.660 slat (usec): min=2, max=3005, avg=78.04, stdev=280.61 00:20:24.660 clat (usec): min=1850, max=18180, avg=9960.93, stdev=4570.11 00:20:24.660 lat (usec): min=2439, max=18797, avg=10038.98, stdev=4598.46 00:20:24.660 clat percentiles (usec): 00:20:24.660 | 1.00th=[ 5014], 5.00th=[ 5145], 10.00th=[ 5211], 20.00th=[ 5276], 00:20:24.660 | 30.00th=[ 5407], 40.00th=[ 6587], 50.00th=[ 8848], 60.00th=[12387], 00:20:24.660 | 70.00th=[14353], 80.00th=[14484], 90.00th=[15139], 95.00th=[17695], 00:20:24.660 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:20:24.660 | 99.99th=[18220] 00:20:24.660 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:20:24.660 slat (usec): min=2, max=2200, avg=74.02, stdev=262.36 00:20:24.660 clat (usec): min=4489, max=18414, avg=9659.22, stdev=4559.24 00:20:24.660 
lat (usec): min=4492, max=18651, avg=9733.24, stdev=4588.73 00:20:24.660 clat percentiles (usec): 00:20:24.660 | 1.00th=[ 4817], 5.00th=[ 4817], 10.00th=[ 4883], 20.00th=[ 4948], 00:20:24.660 | 30.00th=[ 5080], 40.00th=[ 6259], 50.00th=[ 9896], 60.00th=[11207], 00:20:24.660 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[17433], 00:20:24.660 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:20:24.660 | 99.99th=[18482] 00:20:24.660 bw ( KiB/s): min=21536, max=31712, per=25.82%, avg=26624.00, stdev=7195.52, samples=2 00:20:24.660 iops : min= 5384, max= 7928, avg=6656.00, stdev=1798.88, samples=2 00:20:24.660 lat (msec) : 2=0.01%, 4=0.21%, 10=51.31%, 20=48.47% 00:20:24.660 cpu : usr=3.49%, sys=3.19%, ctx=1851, majf=0, minf=1 00:20:24.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:24.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.660 issued rwts: total=6278,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.660 job1: (groupid=0, jobs=1): err= 0: pid=1386320: Sat Dec 14 17:23:20 2024 00:20:24.660 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:20:24.660 slat (usec): min=2, max=3075, avg=58.38, stdev=228.47 00:20:24.660 clat (usec): min=4001, max=18353, avg=7567.46, stdev=3406.36 00:20:24.660 lat (usec): min=4013, max=18357, avg=7625.84, stdev=3427.36 00:20:24.660 clat percentiles (usec): 00:20:24.660 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5276], 20.00th=[ 5407], 00:20:24.660 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6521], 00:20:24.660 | 70.00th=[ 6652], 80.00th=[ 7177], 90.00th=[12911], 95.00th=[17957], 00:20:24.660 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:20:24.660 | 99.99th=[18482] 00:20:24.660 write: IOPS=8644, BW=33.8MiB/s (35.4MB/s)(33.9MiB/1003msec); 0 zone resets 00:20:24.660 slat (usec): min=2, max=3674, avg=57.54, stdev=234.67 00:20:24.660 clat (usec): min=1861, max=18188, avg=7482.94, stdev=3718.58 00:20:24.660 lat (usec): min=2406, max=18196, avg=7540.49, stdev=3739.90 00:20:24.660 clat percentiles (usec): 00:20:24.660 | 1.00th=[ 4424], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:20:24.660 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6128], 00:20:24.660 | 70.00th=[ 6259], 80.00th=[ 9896], 90.00th=[13698], 95.00th=[17695], 00:20:24.660 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:20:24.660 | 99.99th=[18220] 00:20:24.660 bw ( KiB/s): min=31480, max=36864, per=33.14%, avg=34172.00, stdev=3807.06, samples=2 00:20:24.660 iops : min= 7870, max= 9216, avg=8543.00, stdev=951.77, samples=2 00:20:24.660 lat (msec) : 2=0.01%, 4=0.17%, 10=81.47%, 20=18.36% 00:20:24.660 cpu : usr=2.89%, sys=5.49%, ctx=1754, majf=0, minf=2 00:20:24.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:24.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.660 issued rwts: total=8192,8670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.660 job2: (groupid=0, jobs=1): err= 0: pid=1386321: Sat Dec 14 17:23:20 2024 00:20:24.660 read: IOPS=4227, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1003msec) 00:20:24.660 slat (usec): min=2, 
max=3880, avg=114.49, stdev=447.02 00:20:24.660 clat (usec): min=1419, max=18526, avg=14490.99, stdev=3404.96 00:20:24.660 lat (usec): min=1672, max=18529, avg=14605.48, stdev=3400.53 00:20:24.660 clat percentiles (usec): 00:20:24.660 | 1.00th=[ 3359], 5.00th=[ 5997], 10.00th=[10421], 20.00th=[13304], 00:20:24.660 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:20:24.660 | 70.00th=[16909], 80.00th=[17695], 90.00th=[17957], 95.00th=[18220], 00:20:24.660 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:20:24.660 | 99.99th=[18482] 00:20:24.660 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:20:24.660 slat (usec): min=2, max=3244, avg=108.16, stdev=431.19 00:20:24.661 clat (usec): min=6266, max=18747, avg=14191.73, stdev=2772.88 00:20:24.661 lat (usec): min=6270, max=18751, avg=14299.88, stdev=2765.57 00:20:24.661 clat percentiles (usec): 00:20:24.661 | 1.00th=[ 7177], 5.00th=[ 7898], 10.00th=[10945], 20.00th=[11994], 00:20:24.661 | 30.00th=[12780], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:20:24.661 | 70.00th=[14746], 80.00th=[17695], 90.00th=[17957], 95.00th=[17957], 00:20:24.661 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:20:24.661 | 99.99th=[18744] 00:20:24.661 bw ( KiB/s): min=16384, max=20480, per=17.87%, avg=18432.00, stdev=2896.31, samples=2 00:20:24.661 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:20:24.661 lat (msec) : 2=0.08%, 4=0.73%, 10=7.05%, 20=92.13% 00:20:24.661 cpu : usr=1.40%, sys=3.89%, ctx=2140, majf=0, minf=1 00:20:24.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:24.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.661 issued rwts: total=4240,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.661 job3: (groupid=0, jobs=1): err= 0: pid=1386322: Sat Dec 14 17:23:20 2024 00:20:24.661 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:20:24.661 slat (usec): min=2, max=3185, avg=83.42, stdev=313.31 00:20:24.661 clat (usec): min=6539, max=18411, avg=10804.35, stdev=3981.42 00:20:24.661 lat (usec): min=7102, max=18414, avg=10887.77, stdev=4001.50 00:20:24.661 clat percentiles (usec): 00:20:24.661 | 1.00th=[ 6980], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 7898], 00:20:24.661 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 8094], 60.00th=[ 8291], 00:20:24.661 | 70.00th=[12649], 80.00th=[15008], 90.00th=[17957], 95.00th=[18220], 00:20:24.661 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:20:24.661 | 99.99th=[18482] 00:20:24.661 write: IOPS=5905, BW=23.1MiB/s (24.2MB/s)(23.1MiB/1003msec); 0 zone resets 00:20:24.661 slat (usec): min=2, max=3405, avg=86.93, stdev=330.21 00:20:24.661 clat (usec): min=1905, max=18770, avg=11149.79, stdev=4367.81 00:20:24.661 lat (usec): min=3423, max=18774, avg=11236.73, stdev=4389.66 00:20:24.661 clat percentiles (usec): 00:20:24.661 | 1.00th=[ 6521], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7570], 00:20:24.661 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[11338], 00:20:24.661 | 70.00th=[15008], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:20:24.661 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:20:24.661 | 99.99th=[18744] 00:20:24.661 bw ( KiB/s): min=17696, max=28672, per=22.48%, avg=23184.00, 
stdev=7761.20, samples=2 00:20:24.661 iops : min= 4424, max= 7168, avg=5796.00, stdev=1940.30, samples=2 00:20:24.661 lat (msec) : 2=0.01%, 4=0.14%, 10=58.66%, 20=41.19% 00:20:24.661 cpu : usr=1.80%, sys=3.99%, ctx=1601, majf=0, minf=1 00:20:24.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:24.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.661 issued rwts: total=5632,5923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.661 00:20:24.661 Run status group 0 (all jobs): 00:20:24.661 READ: bw=94.8MiB/s (99.4MB/s), 16.5MiB/s-31.9MiB/s (17.3MB/s-33.5MB/s), io=95.1MiB (99.7MB), run=1003-1003msec 00:20:24.661 WRITE: bw=101MiB/s (106MB/s), 17.9MiB/s-33.8MiB/s (18.8MB/s-35.4MB/s), io=101MiB (106MB), run=1003-1003msec 00:20:24.661 00:20:24.661 Disk stats (read/write): 00:20:24.661 nvme0n1: ios=4657/4926, merge=0/0, ticks=13153/13080, in_queue=26233, util=83.55% 00:20:24.661 nvme0n2: ios=7503/7680, merge=0/0, ticks=23551/22704, in_queue=46255, util=85.27% 00:20:24.661 nvme0n3: ios=3584/3775, merge=0/0, ticks=14171/15186, in_queue=29357, util=88.27% 00:20:24.661 nvme0n4: ios=4962/5120, merge=0/0, ticks=15400/16174, in_queue=31574, util=89.32% 00:20:24.661 17:23:21 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:24.661 [global] 00:20:24.661 thread=1 00:20:24.661 invalidate=1 00:20:24.661 rw=randwrite 00:20:24.661 time_based=1 00:20:24.661 runtime=1 00:20:24.661 ioengine=libaio 00:20:24.661 direct=1 00:20:24.661 bs=4096 00:20:24.661 iodepth=128 00:20:24.661 norandommap=0 00:20:24.661 numjobs=1 00:20:24.661 00:20:24.661 verify_dump=1 00:20:24.661 verify_backlog=512 00:20:24.661 verify_state_save=0 00:20:24.661 do_verify=1 00:20:24.661 verify=crc32c-intel 00:20:24.661 [job0] 00:20:24.661 filename=/dev/nvme0n1 00:20:24.661 [job1] 00:20:24.661 filename=/dev/nvme0n2 00:20:24.661 [job2] 00:20:24.661 filename=/dev/nvme0n3 00:20:24.661 [job3] 00:20:24.661 filename=/dev/nvme0n4 00:20:24.661 Could not set queue depth (nvme0n1) 00:20:24.661 Could not set queue depth (nvme0n2) 00:20:24.661 Could not set queue depth (nvme0n3) 00:20:24.661 Could not set queue depth (nvme0n4) 00:20:24.928 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.928 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.928 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.928 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.928 fio-3.35 00:20:24.928 Starting 4 threads 00:20:26.334 00:20:26.334 job0: (groupid=0, jobs=1): err= 0: pid=1386743: Sat Dec 14 17:23:22 2024 00:20:26.334 read: IOPS=6761, BW=26.4MiB/s (27.7MB/s)(26.5MiB/1004msec) 00:20:26.334 slat (usec): min=2, max=4745, avg=71.39, stdev=343.02 00:20:26.334 clat (usec): min=2648, max=26770, avg=9125.07, stdev=6006.32 00:20:26.334 lat (usec): min=4008, max=26783, avg=9196.46, stdev=6053.14 00:20:26.334 clat percentiles (usec): 00:20:26.334 | 1.00th=[ 4883], 5.00th=[ 5080], 10.00th=[ 5080], 20.00th=[ 5145], 00:20:26.334 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5932], 00:20:26.334 | 
70.00th=[10552], 80.00th=[10814], 90.00th=[21890], 95.00th=[22414], 00:20:26.334 | 99.00th=[22676], 99.50th=[22676], 99.90th=[25822], 99.95th=[26084], 00:20:26.334 | 99.99th=[26870] 00:20:26.334 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:20:26.334 slat (usec): min=2, max=4712, avg=68.74, stdev=319.74 00:20:26.334 clat (usec): min=4616, max=26013, avg=9097.84, stdev=6528.84 00:20:26.334 lat (usec): min=4627, max=26019, avg=9166.58, stdev=6575.94 00:20:26.334 clat percentiles (usec): 00:20:26.334 | 1.00th=[ 4752], 5.00th=[ 4752], 10.00th=[ 4817], 20.00th=[ 4883], 00:20:26.334 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5473], 00:20:26.334 | 70.00th=[ 9896], 80.00th=[13829], 90.00th=[21890], 95.00th=[22152], 00:20:26.334 | 99.00th=[22676], 99.50th=[22938], 99.90th=[25822], 99.95th=[25822], 00:20:26.334 | 99.99th=[26084] 00:20:26.334 bw ( KiB/s): min=14304, max=43040, per=30.56%, avg=28672.00, stdev=20319.42, samples=2 00:20:26.334 iops : min= 3576, max=10760, avg=7168.00, stdev=5079.86, samples=2 00:20:26.334 lat (msec) : 4=0.01%, 10=67.87%, 20=14.94%, 50=17.18% 00:20:26.334 cpu : usr=2.59%, sys=5.48%, ctx=1430, majf=0, minf=1 00:20:26.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:26.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.335 issued rwts: total=6789,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.335 job1: (groupid=0, jobs=1): err= 0: pid=1386744: Sat Dec 14 17:23:22 2024 00:20:26.335 read: IOPS=7790, BW=30.4MiB/s (31.9MB/s)(30.5MiB/1001msec) 00:20:26.335 slat (nsec): min=1980, max=2098.1k, avg=60484.08, stdev=206440.85 00:20:26.335 clat (usec): min=355, max=19588, avg=7936.57, stdev=4343.74 00:20:26.335 lat (usec): min=392, max=19654, avg=7997.05, stdev=4376.82 00:20:26.335 clat percentiles (usec): 00:20:26.335 | 1.00th=[ 3556], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5145], 00:20:26.335 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5604], 00:20:26.335 | 70.00th=[10421], 80.00th=[10683], 90.00th=[16319], 95.00th=[17957], 00:20:26.335 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19530], 00:20:26.335 | 99.99th=[19530] 00:20:26.335 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:20:26.335 slat (usec): min=2, max=2021, avg=59.86, stdev=193.78 00:20:26.335 clat (usec): min=4205, max=18696, avg=7887.37, stdev=4506.94 00:20:26.335 lat (usec): min=4235, max=19445, avg=7947.23, stdev=4541.19 00:20:26.335 clat percentiles (usec): 00:20:26.335 | 1.00th=[ 4490], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:20:26.335 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5407], 00:20:26.335 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[16319], 95.00th=[17433], 00:20:26.335 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18482], 00:20:26.335 | 99.99th=[18744] 00:20:26.335 bw ( KiB/s): min=18760, max=18760, per=19.99%, avg=18760.00, stdev= 0.00, samples=1 00:20:26.335 iops : min= 4690, max= 4690, avg=4690.00, stdev= 0.00, samples=1 00:20:26.335 lat (usec) : 500=0.01%, 1000=0.06% 00:20:26.335 lat (msec) : 2=0.10%, 4=0.33%, 10=71.14%, 20=28.35% 00:20:26.335 cpu : usr=5.40%, sys=7.80%, ctx=2192, majf=0, minf=1 00:20:26.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:26.335 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.335 issued rwts: total=7798,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.335 job2: (groupid=0, jobs=1): err= 0: pid=1386745: Sat Dec 14 17:23:22 2024 00:20:26.335 read: IOPS=4033, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:20:26.335 slat (usec): min=2, max=2291, avg=121.82, stdev=292.65 00:20:26.335 clat (usec): min=3233, max=20309, avg=15703.63, stdev=2237.81 00:20:26.335 lat (usec): min=4723, max=21918, avg=15825.45, stdev=2252.63 00:20:26.335 clat percentiles (usec): 00:20:26.335 | 1.00th=[ 8717], 5.00th=[12518], 10.00th=[12518], 20.00th=[12911], 00:20:26.335 | 30.00th=[15401], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:20:26.335 | 70.00th=[16909], 80.00th=[17171], 90.00th=[18220], 95.00th=[18482], 00:20:26.335 | 99.00th=[19006], 99.50th=[19268], 99.90th=[20317], 99.95th=[20317], 00:20:26.335 | 99.99th=[20317] 00:20:26.335 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:20:26.335 slat (usec): min=2, max=2316, avg=119.36, stdev=288.47 00:20:26.335 clat (usec): min=10741, max=18568, avg=15445.93, stdev=1965.76 00:20:26.335 lat (usec): min=10755, max=19042, avg=15565.28, stdev=1981.48 00:20:26.335 clat percentiles (usec): 00:20:26.335 | 1.00th=[11469], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:20:26.335 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:20:26.335 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17433], 95.00th=[17695], 00:20:26.335 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:20:26.335 | 99.99th=[18482] 00:20:26.335 bw ( KiB/s): min=16384, max=16384, per=17.46%, avg=16384.00, stdev= 0.00, samples=2 00:20:26.335 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:20:26.335 lat (msec) : 4=0.01%, 10=0.52%, 20=99.40%, 50=0.07% 00:20:26.335 cpu : usr=2.29%, sys=4.59%, ctx=1580, majf=0, minf=1 00:20:26.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:26.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.335 issued rwts: total=4050,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.335 job3: (groupid=0, jobs=1): err= 0: pid=1386746: Sat Dec 14 17:23:22 2024 00:20:26.335 read: IOPS=3680, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1004msec) 00:20:26.335 slat (usec): min=2, max=2344, avg=129.70, stdev=314.25 00:20:26.335 clat (usec): min=3232, max=24680, avg=16495.35, stdev=3497.68 00:20:26.335 lat (usec): min=3981, max=25377, avg=16625.05, stdev=3522.01 00:20:26.335 clat percentiles (usec): 00:20:26.335 | 1.00th=[ 8586], 5.00th=[11994], 10.00th=[12518], 20.00th=[12780], 00:20:26.335 | 30.00th=[15401], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:20:26.335 | 70.00th=[17433], 80.00th=[18482], 90.00th=[22414], 95.00th=[23200], 00:20:26.335 | 99.00th=[23987], 99.50th=[23987], 99.90th=[24511], 99.95th=[24511], 00:20:26.335 | 99.99th=[24773] 00:20:26.335 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:20:26.335 slat (usec): min=2, max=5920, avg=122.68, stdev=333.44 00:20:26.335 clat (usec): min=10692, max=24428, avg=16085.73, stdev=2733.44 00:20:26.335 lat (usec): min=10695, max=25367, avg=16208.42, 
stdev=2754.03 00:20:26.335 clat percentiles (usec): 00:20:26.335 | 1.00th=[11469], 5.00th=[11863], 10.00th=[12256], 20.00th=[12518], 00:20:26.335 | 30.00th=[15664], 40.00th=[16319], 50.00th=[16581], 60.00th=[16909], 00:20:26.335 | 70.00th=[17171], 80.00th=[17433], 90.00th=[19268], 95.00th=[21627], 00:20:26.335 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23725], 99.95th=[24511], 00:20:26.335 | 99.99th=[24511] 00:20:26.335 bw ( KiB/s): min=16256, max=16384, per=17.39%, avg=16320.00, stdev=90.51, samples=2 00:20:26.335 iops : min= 4064, max= 4096, avg=4080.00, stdev=22.63, samples=2 00:20:26.335 lat (msec) : 4=0.04%, 10=0.54%, 20=88.32%, 50=11.10% 00:20:26.335 cpu : usr=2.69%, sys=4.19%, ctx=1519, majf=0, minf=1 00:20:26.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:26.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.335 issued rwts: total=3695,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.335 00:20:26.335 Run status group 0 (all jobs): 00:20:26.335 READ: bw=86.9MiB/s (91.1MB/s), 14.4MiB/s-30.4MiB/s (15.1MB/s-31.9MB/s), io=87.2MiB (91.5MB), run=1001-1004msec 00:20:26.335 WRITE: bw=91.6MiB/s (96.1MB/s), 15.9MiB/s-32.0MiB/s (16.7MB/s-33.5MB/s), io=92.0MiB (96.5MB), run=1001-1004msec 00:20:26.335 00:20:26.335 Disk stats (read/write): 00:20:26.335 nvme0n1: ios=6377/6656, merge=0/0, ticks=13141/13159, in_queue=26300, util=84.87% 00:20:26.335 nvme0n2: ios=5800/6144, merge=0/0, ticks=14376/15273, in_queue=29649, util=85.52% 00:20:26.335 nvme0n3: ios=3305/3584, merge=0/0, ticks=16908/17944, in_queue=34852, util=88.50% 00:20:26.335 nvme0n4: ios=3072/3466, merge=0/0, ticks=15894/17425, in_queue=33319, util=89.34% 00:20:26.335 17:23:22 -- target/fio.sh@55 -- # sync 00:20:26.335 17:23:22 -- target/fio.sh@59 -- # fio_pid=1387011 00:20:26.335 17:23:22 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:26.335 17:23:22 -- target/fio.sh@61 -- # sleep 3 00:20:26.335 [global] 00:20:26.335 thread=1 00:20:26.335 invalidate=1 00:20:26.335 rw=read 00:20:26.335 time_based=1 00:20:26.335 runtime=10 00:20:26.335 ioengine=libaio 00:20:26.335 direct=1 00:20:26.335 bs=4096 00:20:26.335 iodepth=1 00:20:26.335 norandommap=1 00:20:26.335 numjobs=1 00:20:26.335 00:20:26.335 [job0] 00:20:26.335 filename=/dev/nvme0n1 00:20:26.335 [job1] 00:20:26.335 filename=/dev/nvme0n2 00:20:26.335 [job2] 00:20:26.335 filename=/dev/nvme0n3 00:20:26.335 [job3] 00:20:26.335 filename=/dev/nvme0n4 00:20:26.335 Could not set queue depth (nvme0n1) 00:20:26.335 Could not set queue depth (nvme0n2) 00:20:26.335 Could not set queue depth (nvme0n3) 00:20:26.335 Could not set queue depth (nvme0n4) 00:20:26.596 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.596 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.596 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.596 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.596 fio-3.35 00:20:26.596 Starting 4 threads 00:20:29.132 17:23:25 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 
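The trace above backgrounds a 10-second read job (fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10, pid 1387011) and then deletes the backing bdevs while that job is still running; the io_u "Operation not supported" errors that follow are the expected outcome of the hotplug test. A condensed sketch of that sequence, using only commands and names already visible in this trace (the variable names and the loop are a paraphrase, not the actual fio.sh code):

#!/usr/bin/env bash
# Hotplug phase, paraphrased from the target/fio.sh trace in this log.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
FIO=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper

# Start a 10-second read job against the connected namespaces in the background.
$FIO -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Tear the bdevs out from under the running job: the raid/concat volumes first,
# then the malloc bdevs that back them.
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$m"
done

# fio is expected to fail once its files disappear (exit status 4 in this run).
fio_status=0
wait "$fio_pid" || fio_status=$?
[ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'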
00:20:29.391 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=78282752, buflen=4096 00:20:29.391 fio: pid=1387181, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:29.391 17:23:25 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:29.391 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=84672512, buflen=4096 00:20:29.391 fio: pid=1387180, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:29.391 17:23:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:29.391 17:23:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:29.651 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=35942400, buflen=4096 00:20:29.651 fio: pid=1387177, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:29.651 17:23:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:29.651 17:23:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:29.910 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45981696, buflen=4096 00:20:29.910 fio: pid=1387179, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:29.910 17:23:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:29.910 17:23:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:29.910 00:20:29.910 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1387177: Sat Dec 14 17:23:26 2024 00:20:29.910 read: IOPS=8403, BW=32.8MiB/s (34.4MB/s)(98.3MiB/2994msec) 00:20:29.910 slat (usec): min=7, max=15919, avg=10.39, stdev=130.50 00:20:29.910 clat (usec): min=50, max=8574, avg=106.70, stdev=55.35 00:20:29.910 lat (usec): min=59, max=16050, avg=117.09, stdev=141.68 00:20:29.910 clat percentiles (usec): 00:20:29.910 | 1.00th=[ 63], 5.00th=[ 76], 10.00th=[ 81], 20.00th=[ 100], 00:20:29.910 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:20:29.910 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 120], 95.00th=[ 123], 00:20:29.910 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 161], 99.95th=[ 186], 00:20:29.910 | 99.99th=[ 212] 00:20:29.910 bw ( KiB/s): min=32976, max=33136, per=28.64%, avg=33064.00, stdev=75.47, samples=5 00:20:29.910 iops : min= 8244, max= 8284, avg=8266.00, stdev=18.87, samples=5 00:20:29.910 lat (usec) : 100=20.26%, 250=79.73%, 500=0.01% 00:20:29.910 lat (msec) : 10=0.01% 00:20:29.910 cpu : usr=4.24%, sys=11.59%, ctx=25166, majf=0, minf=1 00:20:29.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.910 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.911 issued rwts: total=25160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.911 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1387179: Sat Dec 14 17:23:26 2024 00:20:29.911 read: IOPS=8609, BW=33.6MiB/s (35.3MB/s)(108MiB/3207msec) 00:20:29.911 slat 
(usec): min=6, max=16030, avg=11.10, stdev=157.87 00:20:29.911 clat (usec): min=36, max=8757, avg=102.73, stdev=55.51 00:20:29.911 lat (usec): min=58, max=16120, avg=113.83, stdev=167.28 00:20:29.911 clat percentiles (usec): 00:20:29.911 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 67], 20.00th=[ 93], 00:20:29.911 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:20:29.911 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 119], 95.00th=[ 123], 00:20:29.911 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 157], 99.95th=[ 186], 00:20:29.911 | 99.99th=[ 231] 00:20:29.911 bw ( KiB/s): min=32968, max=37982, per=29.35%, avg=33886.33, stdev=2007.84, samples=6 00:20:29.911 iops : min= 8242, max= 9495, avg=8471.50, stdev=501.76, samples=6 00:20:29.911 lat (usec) : 50=0.01%, 100=26.01%, 250=73.97%, 500=0.01% 00:20:29.911 lat (msec) : 10=0.01% 00:20:29.911 cpu : usr=4.52%, sys=11.79%, ctx=27619, majf=0, minf=1 00:20:29.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.911 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.911 issued rwts: total=27611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.911 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1387180: Sat Dec 14 17:23:26 2024 00:20:29.911 read: IOPS=7380, BW=28.8MiB/s (30.2MB/s)(80.8MiB/2801msec) 00:20:29.911 slat (usec): min=6, max=11970, avg=10.32, stdev=98.82 00:20:29.911 clat (usec): min=64, max=8602, avg=123.20, stdev=60.77 00:20:29.911 lat (usec): min=78, max=12065, avg=133.52, stdev=115.78 00:20:29.911 clat percentiles (usec): 00:20:29.911 | 1.00th=[ 78], 5.00th=[ 86], 10.00th=[ 110], 20.00th=[ 119], 00:20:29.911 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 127], 00:20:29.911 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 135], 95.00th=[ 139], 00:20:29.911 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 180], 00:20:29.911 | 99.99th=[ 194] 00:20:29.911 bw ( KiB/s): min=29208, max=29480, per=25.43%, avg=29360.00, stdev=98.79, samples=5 00:20:29.911 iops : min= 7302, max= 7370, avg=7340.00, stdev=24.70, samples=5 00:20:29.911 lat (usec) : 100=8.36%, 250=91.63% 00:20:29.911 lat (msec) : 10=0.01% 00:20:29.911 cpu : usr=3.64%, sys=10.71%, ctx=20675, majf=0, minf=2 00:20:29.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.911 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.911 issued rwts: total=20673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.911 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1387181: Sat Dec 14 17:23:26 2024 00:20:29.911 read: IOPS=7303, BW=28.5MiB/s (29.9MB/s)(74.7MiB/2617msec) 00:20:29.911 slat (nsec): min=8437, max=36742, avg=9121.93, stdev=852.07 00:20:29.911 clat (usec): min=68, max=182, avg=125.81, stdev=10.92 00:20:29.911 lat (usec): min=78, max=191, avg=134.93, stdev=10.92 00:20:29.911 clat percentiles (usec): 00:20:29.911 | 1.00th=[ 87], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 121], 00:20:29.911 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 128], 00:20:29.911 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 135], 
95.00th=[ 139], 00:20:29.911 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 180], 00:20:29.911 | 99.99th=[ 182] 00:20:29.911 bw ( KiB/s): min=29208, max=29472, per=25.44%, avg=29368.00, stdev=96.99, samples=5 00:20:29.911 iops : min= 7302, max= 7368, avg=7342.00, stdev=24.25, samples=5 00:20:29.911 lat (usec) : 100=2.92%, 250=97.07% 00:20:29.911 cpu : usr=3.10%, sys=10.97%, ctx=19113, majf=0, minf=2 00:20:29.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.911 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.911 issued rwts: total=19113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.911 00:20:29.911 Run status group 0 (all jobs): 00:20:29.911 READ: bw=113MiB/s (118MB/s), 28.5MiB/s-33.6MiB/s (29.9MB/s-35.3MB/s), io=362MiB (379MB), run=2617-3207msec 00:20:29.911 00:20:29.911 Disk stats (read/write): 00:20:29.911 nvme0n1: ios=23441/0, merge=0/0, ticks=2338/0, in_queue=2338, util=93.35% 00:20:29.911 nvme0n2: ios=26202/0, merge=0/0, ticks=2512/0, in_queue=2512, util=93.93% 00:20:29.911 nvme0n3: ios=18958/0, merge=0/0, ticks=2166/0, in_queue=2166, util=96.07% 00:20:29.911 nvme0n4: ios=18964/0, merge=0/0, ticks=2208/0, in_queue=2208, util=96.46% 00:20:30.170 17:23:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.170 17:23:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:30.430 17:23:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.430 17:23:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:30.430 17:23:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.430 17:23:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:30.690 17:23:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.690 17:23:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:30.949 17:23:27 -- target/fio.sh@69 -- # fio_status=0 00:20:30.949 17:23:27 -- target/fio.sh@70 -- # wait 1387011 00:20:30.949 17:23:27 -- target/fio.sh@70 -- # fio_status=4 00:20:30.949 17:23:27 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:31.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:31.887 17:23:28 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:31.887 17:23:28 -- common/autotest_common.sh@1208 -- # local i=0 00:20:31.887 17:23:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:31.887 17:23:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.887 17:23:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:31.887 17:23:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.887 17:23:28 -- common/autotest_common.sh@1220 -- # return 0 00:20:31.887 17:23:28 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:31.887 17:23:28 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:31.887 nvmf hotplug test: fio failed as 
expected 00:20:31.887 17:23:28 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.146 17:23:28 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:32.146 17:23:28 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:32.146 17:23:28 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:32.146 17:23:28 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:32.146 17:23:28 -- target/fio.sh@91 -- # nvmftestfini 00:20:32.146 17:23:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:32.146 17:23:28 -- nvmf/common.sh@116 -- # sync 00:20:32.146 17:23:28 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:32.146 17:23:28 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:32.146 17:23:28 -- nvmf/common.sh@119 -- # set +e 00:20:32.146 17:23:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:32.146 17:23:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:32.146 rmmod nvme_rdma 00:20:32.146 rmmod nvme_fabrics 00:20:32.146 17:23:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:32.146 17:23:28 -- nvmf/common.sh@123 -- # set -e 00:20:32.146 17:23:28 -- nvmf/common.sh@124 -- # return 0 00:20:32.146 17:23:28 -- nvmf/common.sh@477 -- # '[' -n 1383934 ']' 00:20:32.146 17:23:28 -- nvmf/common.sh@478 -- # killprocess 1383934 00:20:32.146 17:23:28 -- common/autotest_common.sh@936 -- # '[' -z 1383934 ']' 00:20:32.146 17:23:28 -- common/autotest_common.sh@940 -- # kill -0 1383934 00:20:32.146 17:23:28 -- common/autotest_common.sh@941 -- # uname 00:20:32.146 17:23:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.146 17:23:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1383934 00:20:32.146 17:23:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:32.146 17:23:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:32.146 17:23:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1383934' 00:20:32.146 killing process with pid 1383934 00:20:32.146 17:23:28 -- common/autotest_common.sh@955 -- # kill 1383934 00:20:32.146 17:23:28 -- common/autotest_common.sh@960 -- # wait 1383934 00:20:32.405 17:23:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:32.405 17:23:28 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:32.405 00:20:32.405 real 0m26.793s 00:20:32.405 user 2m9.144s 00:20:32.405 sys 0m10.090s 00:20:32.405 17:23:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:32.405 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:20:32.405 ************************************ 00:20:32.405 END TEST nvmf_fio_target 00:20:32.405 ************************************ 00:20:32.405 17:23:29 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:32.405 17:23:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:32.405 17:23:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:32.405 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:20:32.405 ************************************ 00:20:32.405 START TEST nvmf_bdevio 00:20:32.405 ************************************ 00:20:32.405 17:23:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:32.665 * Looking for test storage... 
00:20:32.666 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:32.666 17:23:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:32.666 17:23:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:32.666 17:23:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:32.666 17:23:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:32.666 17:23:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:32.666 17:23:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:32.666 17:23:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:32.666 17:23:29 -- scripts/common.sh@335 -- # IFS=.-: 00:20:32.666 17:23:29 -- scripts/common.sh@335 -- # read -ra ver1 00:20:32.666 17:23:29 -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.666 17:23:29 -- scripts/common.sh@336 -- # read -ra ver2 00:20:32.666 17:23:29 -- scripts/common.sh@337 -- # local 'op=<' 00:20:32.666 17:23:29 -- scripts/common.sh@339 -- # ver1_l=2 00:20:32.666 17:23:29 -- scripts/common.sh@340 -- # ver2_l=1 00:20:32.666 17:23:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:32.666 17:23:29 -- scripts/common.sh@343 -- # case "$op" in 00:20:32.666 17:23:29 -- scripts/common.sh@344 -- # : 1 00:20:32.666 17:23:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:32.666 17:23:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:32.666 17:23:29 -- scripts/common.sh@364 -- # decimal 1 00:20:32.666 17:23:29 -- scripts/common.sh@352 -- # local d=1 00:20:32.666 17:23:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.666 17:23:29 -- scripts/common.sh@354 -- # echo 1 00:20:32.666 17:23:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:32.666 17:23:29 -- scripts/common.sh@365 -- # decimal 2 00:20:32.666 17:23:29 -- scripts/common.sh@352 -- # local d=2 00:20:32.666 17:23:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.666 17:23:29 -- scripts/common.sh@354 -- # echo 2 00:20:32.666 17:23:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:32.666 17:23:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:32.666 17:23:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:32.666 17:23:29 -- scripts/common.sh@367 -- # return 0 00:20:32.666 17:23:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.666 17:23:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:32.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.666 --rc genhtml_branch_coverage=1 00:20:32.666 --rc genhtml_function_coverage=1 00:20:32.666 --rc genhtml_legend=1 00:20:32.666 --rc geninfo_all_blocks=1 00:20:32.666 --rc geninfo_unexecuted_blocks=1 00:20:32.666 00:20:32.666 ' 00:20:32.666 17:23:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:32.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.666 --rc genhtml_branch_coverage=1 00:20:32.666 --rc genhtml_function_coverage=1 00:20:32.666 --rc genhtml_legend=1 00:20:32.666 --rc geninfo_all_blocks=1 00:20:32.666 --rc geninfo_unexecuted_blocks=1 00:20:32.666 00:20:32.666 ' 00:20:32.666 17:23:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:32.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.666 --rc genhtml_branch_coverage=1 00:20:32.666 --rc genhtml_function_coverage=1 00:20:32.666 --rc genhtml_legend=1 00:20:32.666 --rc geninfo_all_blocks=1 00:20:32.666 --rc geninfo_unexecuted_blocks=1 00:20:32.666 00:20:32.666 ' 
00:20:32.666 17:23:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:32.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.666 --rc genhtml_branch_coverage=1 00:20:32.666 --rc genhtml_function_coverage=1 00:20:32.666 --rc genhtml_legend=1 00:20:32.666 --rc geninfo_all_blocks=1 00:20:32.666 --rc geninfo_unexecuted_blocks=1 00:20:32.666 00:20:32.666 ' 00:20:32.666 17:23:29 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.666 17:23:29 -- nvmf/common.sh@7 -- # uname -s 00:20:32.666 17:23:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.666 17:23:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.666 17:23:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.666 17:23:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.666 17:23:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.666 17:23:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.666 17:23:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.666 17:23:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.666 17:23:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.666 17:23:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.666 17:23:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:32.666 17:23:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:32.666 17:23:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.666 17:23:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.666 17:23:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.666 17:23:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:32.666 17:23:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.666 17:23:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.666 17:23:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.666 17:23:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.666 17:23:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.666 17:23:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.666 17:23:29 -- paths/export.sh@5 -- # export PATH 00:20:32.666 17:23:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.666 17:23:29 -- nvmf/common.sh@46 -- # : 0 00:20:32.666 17:23:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:32.666 17:23:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:32.666 17:23:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:32.666 17:23:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.666 17:23:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.666 17:23:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:32.666 17:23:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:32.666 17:23:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:32.666 17:23:29 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:32.666 17:23:29 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:32.666 17:23:29 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:32.666 17:23:29 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:32.666 17:23:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.666 17:23:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:32.666 17:23:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:32.666 17:23:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:32.666 17:23:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.666 17:23:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.666 17:23:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.666 17:23:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:32.666 17:23:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:32.666 17:23:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:32.666 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:20:39.241 17:23:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:39.241 17:23:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:39.241 17:23:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:39.241 17:23:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:39.241 17:23:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:39.241 17:23:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:39.241 17:23:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:39.241 17:23:35 -- nvmf/common.sh@294 -- # net_devs=() 00:20:39.241 17:23:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:39.241 17:23:35 -- nvmf/common.sh@295 
-- # e810=() 00:20:39.241 17:23:35 -- nvmf/common.sh@295 -- # local -ga e810 00:20:39.241 17:23:35 -- nvmf/common.sh@296 -- # x722=() 00:20:39.241 17:23:35 -- nvmf/common.sh@296 -- # local -ga x722 00:20:39.241 17:23:35 -- nvmf/common.sh@297 -- # mlx=() 00:20:39.241 17:23:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:39.241 17:23:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.241 17:23:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:39.241 17:23:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:39.241 17:23:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:39.241 17:23:35 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:39.241 17:23:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:39.241 17:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:39.241 17:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:39.241 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:39.241 17:23:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:39.241 17:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:39.241 17:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:39.241 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:39.241 17:23:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:39.241 17:23:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:39.241 17:23:35 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:39.241 17:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.241 17:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:20:39.241 17:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.241 17:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:39.241 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:39.241 17:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.241 17:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:39.241 17:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.241 17:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:39.241 17:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.241 17:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:39.241 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:39.241 17:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.241 17:23:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:39.241 17:23:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:39.241 17:23:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:39.241 17:23:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:39.241 17:23:35 -- nvmf/common.sh@57 -- # uname 00:20:39.241 17:23:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:39.241 17:23:35 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:39.241 17:23:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:39.241 17:23:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:39.241 17:23:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:39.241 17:23:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:39.241 17:23:35 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:39.241 17:23:35 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:39.241 17:23:35 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:39.241 17:23:35 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:39.241 17:23:35 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:39.241 17:23:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:39.241 17:23:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:39.241 17:23:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:39.241 17:23:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:39.241 17:23:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:39.241 17:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.241 17:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.241 17:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:39.241 17:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:39.241 17:23:35 -- nvmf/common.sh@104 -- # continue 2 00:20:39.242 17:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:39.242 17:23:35 -- nvmf/common.sh@104 -- # continue 2 00:20:39.242 17:23:35 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:39.242 17:23:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:39.242 17:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.242 17:23:35 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:39.242 17:23:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:39.242 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:39.242 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:39.242 altname enp217s0f0np0 00:20:39.242 altname ens818f0np0 00:20:39.242 inet 192.168.100.8/24 scope global mlx_0_0 00:20:39.242 valid_lft forever preferred_lft forever 00:20:39.242 17:23:35 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:39.242 17:23:35 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:39.242 17:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.242 17:23:35 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:39.242 17:23:35 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:39.242 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:39.242 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:39.242 altname enp217s0f1np1 00:20:39.242 altname ens818f1np1 00:20:39.242 inet 192.168.100.9/24 scope global mlx_0_1 00:20:39.242 valid_lft forever preferred_lft forever 00:20:39.242 17:23:35 -- nvmf/common.sh@410 -- # return 0 00:20:39.242 17:23:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:39.242 17:23:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:39.242 17:23:35 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:39.242 17:23:35 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:39.242 17:23:35 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:39.242 17:23:35 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:39.242 17:23:35 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:39.242 17:23:35 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:39.242 17:23:35 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:39.242 17:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:39.242 17:23:35 -- nvmf/common.sh@104 -- # continue 2 00:20:39.242 17:23:35 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.242 17:23:35 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:39.242 17:23:35 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:39.242 17:23:35 -- 
nvmf/common.sh@104 -- # continue 2 00:20:39.242 17:23:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:39.242 17:23:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:39.242 17:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.242 17:23:35 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:39.242 17:23:35 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:39.242 17:23:35 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.242 17:23:35 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.242 17:23:35 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:39.242 192.168.100.9' 00:20:39.242 17:23:35 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:39.242 192.168.100.9' 00:20:39.242 17:23:35 -- nvmf/common.sh@445 -- # head -n 1 00:20:39.242 17:23:35 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:39.242 17:23:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:39.242 192.168.100.9' 00:20:39.242 17:23:35 -- nvmf/common.sh@446 -- # head -n 1 00:20:39.242 17:23:35 -- nvmf/common.sh@446 -- # tail -n +2 00:20:39.242 17:23:35 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:39.242 17:23:35 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:39.242 17:23:35 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:39.242 17:23:35 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:39.242 17:23:35 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:39.242 17:23:35 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:39.502 17:23:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:39.502 17:23:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:39.502 17:23:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.502 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:20:39.502 17:23:35 -- nvmf/common.sh@469 -- # nvmfpid=1391469 00:20:39.502 17:23:35 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:39.502 17:23:35 -- nvmf/common.sh@470 -- # waitforlisten 1391469 00:20:39.502 17:23:35 -- common/autotest_common.sh@829 -- # '[' -z 1391469 ']' 00:20:39.502 17:23:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.502 17:23:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.502 17:23:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.502 17:23:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.502 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:20:39.502 [2024-12-14 17:23:35.993280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:39.502 [2024-12-14 17:23:35.993333] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.502 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.502 [2024-12-14 17:23:36.068436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.502 [2024-12-14 17:23:36.105993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:39.502 [2024-12-14 17:23:36.106109] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.502 [2024-12-14 17:23:36.106119] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.502 [2024-12-14 17:23:36.106132] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.502 [2024-12-14 17:23:36.106253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:39.502 [2024-12-14 17:23:36.106386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:39.502 [2024-12-14 17:23:36.106494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.502 [2024-12-14 17:23:36.106515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:40.440 17:23:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.440 17:23:36 -- common/autotest_common.sh@862 -- # return 0 00:20:40.440 17:23:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:40.440 17:23:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.440 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:40.440 17:23:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.440 17:23:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:40.440 17:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.440 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:40.440 [2024-12-14 17:23:36.882678] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x118f9b0/0x1193e80) succeed. 00:20:40.440 [2024-12-14 17:23:36.893394] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1190f50/0x11d5520) succeed. 
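At this point the bdevio run has nvmf_tgt up (core mask 0x78) and the RDMA transport created; the rpc_cmd calls traced just below add the Malloc0 bdev, the cnode1 subsystem, its namespace and the 192.168.100.8:4420 listener. rpc_cmd is the test-harness wrapper around SPDK's JSON-RPC interface, so a rough standalone equivalent of the whole bring-up, reusing only the arguments visible in this log and assuming scripts/rpc.py as the client (a sketch, not a verbatim replay of target/bdevio.sh):

  # Start the target and configure it the way this run does.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420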
00:20:40.440 17:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.440 17:23:37 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:40.440 17:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.440 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:20:40.440 Malloc0 00:20:40.440 17:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.440 17:23:37 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.440 17:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.440 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:20:40.440 17:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.440 17:23:37 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.440 17:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.440 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:20:40.440 17:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.440 17:23:37 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:40.440 17:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.441 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:20:40.441 [2024-12-14 17:23:37.061931] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:40.441 17:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.441 17:23:37 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:40.441 17:23:37 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:40.441 17:23:37 -- nvmf/common.sh@520 -- # config=() 00:20:40.441 17:23:37 -- nvmf/common.sh@520 -- # local subsystem config 00:20:40.441 17:23:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:40.441 17:23:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:40.441 { 00:20:40.441 "params": { 00:20:40.441 "name": "Nvme$subsystem", 00:20:40.441 "trtype": "$TEST_TRANSPORT", 00:20:40.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.441 "adrfam": "ipv4", 00:20:40.441 "trsvcid": "$NVMF_PORT", 00:20:40.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.441 "hdgst": ${hdgst:-false}, 00:20:40.441 "ddgst": ${ddgst:-false} 00:20:40.441 }, 00:20:40.441 "method": "bdev_nvme_attach_controller" 00:20:40.441 } 00:20:40.441 EOF 00:20:40.441 )") 00:20:40.441 17:23:37 -- nvmf/common.sh@542 -- # cat 00:20:40.441 17:23:37 -- nvmf/common.sh@544 -- # jq . 00:20:40.441 17:23:37 -- nvmf/common.sh@545 -- # IFS=, 00:20:40.441 17:23:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:40.441 "params": { 00:20:40.441 "name": "Nvme1", 00:20:40.441 "trtype": "rdma", 00:20:40.441 "traddr": "192.168.100.8", 00:20:40.441 "adrfam": "ipv4", 00:20:40.441 "trsvcid": "4420", 00:20:40.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.441 "hdgst": false, 00:20:40.441 "ddgst": false 00:20:40.441 }, 00:20:40.441 "method": "bdev_nvme_attach_controller" 00:20:40.441 }' 00:20:40.441 [2024-12-14 17:23:37.091932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:40.441 [2024-12-14 17:23:37.091988] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391755 ] 00:20:40.441 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.700 [2024-12-14 17:23:37.163600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:40.700 [2024-12-14 17:23:37.202045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.700 [2024-12-14 17:23:37.202140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.700 [2024-12-14 17:23:37.202142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.700 [2024-12-14 17:23:37.373314] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:40.700 [2024-12-14 17:23:37.373345] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:40.700 I/O targets: 00:20:40.700 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:40.700 00:20:40.700 00:20:40.700 CUnit - A unit testing framework for C - Version 2.1-3 00:20:40.700 http://cunit.sourceforge.net/ 00:20:40.700 00:20:40.700 00:20:40.700 Suite: bdevio tests on: Nvme1n1 00:20:40.700 Test: blockdev write read block ...passed 00:20:40.700 Test: blockdev write zeroes read block ...passed 00:20:41.009 Test: blockdev write zeroes read no split ...passed 00:20:41.009 Test: blockdev write zeroes read split ...passed 00:20:41.009 Test: blockdev write zeroes read split partial ...passed 00:20:41.009 Test: blockdev reset ...[2024-12-14 17:23:37.403179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.009 [2024-12-14 17:23:37.425918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:41.009 [2024-12-14 17:23:37.452569] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:41.009 passed 00:20:41.009 Test: blockdev write read 8 blocks ...passed 00:20:41.009 Test: blockdev write read size > 128k ...passed 00:20:41.009 Test: blockdev write read invalid size ...passed 00:20:41.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.009 Test: blockdev write read max offset ...passed 00:20:41.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.009 Test: blockdev writev readv 8 blocks ...passed 00:20:41.009 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.009 Test: blockdev writev readv block ...passed 00:20:41.009 Test: blockdev writev readv size > 128k ...passed 00:20:41.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.009 Test: blockdev comparev and writev ...[2024-12-14 17:23:37.455480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.455518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.455531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.455540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.455702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.455713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.455723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.455733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.455877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.455888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.455901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.455910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.456071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.456082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.456092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.009 [2024-12-14 17:23:37.456101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:41.009 passed 00:20:41.009 Test: blockdev nvme passthru rw ...passed 00:20:41.009 Test: blockdev nvme passthru vendor specific ...[2024-12-14 17:23:37.456353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.009 [2024-12-14 17:23:37.456365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.456408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.009 [2024-12-14 17:23:37.456418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:41.009 [2024-12-14 17:23:37.456463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.010 [2024-12-14 17:23:37.456473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:41.010 [2024-12-14 17:23:37.456517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.010 [2024-12-14 17:23:37.456527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:41.010 passed 00:20:41.010 Test: blockdev nvme admin passthru ...passed 00:20:41.010 Test: blockdev copy ...passed 00:20:41.010 00:20:41.010 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.010 suites 1 1 n/a 0 0 00:20:41.010 tests 23 23 23 0 0 00:20:41.010 asserts 152 152 152 0 n/a 00:20:41.010 00:20:41.010 Elapsed time = 0.169 seconds 00:20:41.010 17:23:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.010 17:23:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.010 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:20:41.010 17:23:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.010 17:23:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:41.010 17:23:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:41.010 17:23:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:41.010 17:23:37 -- nvmf/common.sh@116 -- # sync 00:20:41.010 17:23:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:41.010 17:23:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:41.010 17:23:37 -- nvmf/common.sh@119 -- # set +e 00:20:41.010 17:23:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:41.010 17:23:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:41.010 rmmod nvme_rdma 00:20:41.010 rmmod nvme_fabrics 00:20:41.010 17:23:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:41.319 17:23:37 -- nvmf/common.sh@123 -- # set -e 00:20:41.319 17:23:37 -- nvmf/common.sh@124 -- # return 0 00:20:41.319 17:23:37 -- nvmf/common.sh@477 -- # '[' -n 1391469 ']' 00:20:41.319 17:23:37 -- nvmf/common.sh@478 -- # killprocess 1391469 00:20:41.319 17:23:37 -- common/autotest_common.sh@936 -- # '[' -z 1391469 ']' 00:20:41.319 17:23:37 -- common/autotest_common.sh@940 -- # kill -0 1391469 00:20:41.319 17:23:37 -- common/autotest_common.sh@941 -- # uname 00:20:41.319 17:23:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:41.319 17:23:37 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1391469 00:20:41.319 17:23:37 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:41.319 17:23:37 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:41.319 17:23:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1391469' 00:20:41.319 killing process with pid 1391469 00:20:41.319 17:23:37 -- common/autotest_common.sh@955 -- # kill 1391469 00:20:41.319 17:23:37 -- common/autotest_common.sh@960 -- # wait 1391469 00:20:41.578 17:23:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:41.578 17:23:38 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:41.578 00:20:41.578 real 0m8.996s 00:20:41.578 user 0m10.562s 00:20:41.578 sys 0m5.750s 00:20:41.578 17:23:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:41.578 17:23:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.578 ************************************ 00:20:41.578 END TEST nvmf_bdevio 00:20:41.578 ************************************ 00:20:41.578 17:23:38 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:41.578 17:23:38 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:41.578 17:23:38 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:41.579 17:23:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:41.579 17:23:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.579 17:23:38 -- common/autotest_common.sh@10 -- # set +x 00:20:41.579 ************************************ 00:20:41.579 START TEST nvmf_fuzz 00:20:41.579 ************************************ 00:20:41.579 17:23:38 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:41.579 * Looking for test storage... 00:20:41.579 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:41.579 17:23:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:41.579 17:23:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:41.579 17:23:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:41.579 17:23:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:41.579 17:23:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:41.579 17:23:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:41.579 17:23:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:41.579 17:23:38 -- scripts/common.sh@335 -- # IFS=.-: 00:20:41.579 17:23:38 -- scripts/common.sh@335 -- # read -ra ver1 00:20:41.579 17:23:38 -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.579 17:23:38 -- scripts/common.sh@336 -- # read -ra ver2 00:20:41.579 17:23:38 -- scripts/common.sh@337 -- # local 'op=<' 00:20:41.579 17:23:38 -- scripts/common.sh@339 -- # ver1_l=2 00:20:41.579 17:23:38 -- scripts/common.sh@340 -- # ver2_l=1 00:20:41.579 17:23:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:41.579 17:23:38 -- scripts/common.sh@343 -- # case "$op" in 00:20:41.579 17:23:38 -- scripts/common.sh@344 -- # : 1 00:20:41.579 17:23:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:41.579 17:23:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.579 17:23:38 -- scripts/common.sh@364 -- # decimal 1 00:20:41.579 17:23:38 -- scripts/common.sh@352 -- # local d=1 00:20:41.579 17:23:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.579 17:23:38 -- scripts/common.sh@354 -- # echo 1 00:20:41.579 17:23:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:41.579 17:23:38 -- scripts/common.sh@365 -- # decimal 2 00:20:41.579 17:23:38 -- scripts/common.sh@352 -- # local d=2 00:20:41.579 17:23:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.579 17:23:38 -- scripts/common.sh@354 -- # echo 2 00:20:41.579 17:23:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:41.579 17:23:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:41.579 17:23:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:41.579 17:23:38 -- scripts/common.sh@367 -- # return 0 00:20:41.579 17:23:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.579 17:23:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.579 --rc genhtml_branch_coverage=1 00:20:41.579 --rc genhtml_function_coverage=1 00:20:41.579 --rc genhtml_legend=1 00:20:41.579 --rc geninfo_all_blocks=1 00:20:41.579 --rc geninfo_unexecuted_blocks=1 00:20:41.579 00:20:41.579 ' 00:20:41.579 17:23:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.579 --rc genhtml_branch_coverage=1 00:20:41.579 --rc genhtml_function_coverage=1 00:20:41.579 --rc genhtml_legend=1 00:20:41.579 --rc geninfo_all_blocks=1 00:20:41.579 --rc geninfo_unexecuted_blocks=1 00:20:41.579 00:20:41.579 ' 00:20:41.579 17:23:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.579 --rc genhtml_branch_coverage=1 00:20:41.579 --rc genhtml_function_coverage=1 00:20:41.579 --rc genhtml_legend=1 00:20:41.579 --rc geninfo_all_blocks=1 00:20:41.579 --rc geninfo_unexecuted_blocks=1 00:20:41.579 00:20:41.579 ' 00:20:41.579 17:23:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.579 --rc genhtml_branch_coverage=1 00:20:41.579 --rc genhtml_function_coverage=1 00:20:41.579 --rc genhtml_legend=1 00:20:41.579 --rc geninfo_all_blocks=1 00:20:41.579 --rc geninfo_unexecuted_blocks=1 00:20:41.579 00:20:41.579 ' 00:20:41.579 17:23:38 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.579 17:23:38 -- nvmf/common.sh@7 -- # uname -s 00:20:41.839 17:23:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.839 17:23:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.839 17:23:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.839 17:23:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.839 17:23:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.839 17:23:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.839 17:23:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.839 17:23:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.839 17:23:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.839 17:23:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.839 17:23:38 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:41.839 17:23:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:41.839 17:23:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.839 17:23:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.839 17:23:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.839 17:23:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:41.839 17:23:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.839 17:23:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.839 17:23:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.839 17:23:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.839 17:23:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.839 17:23:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.839 17:23:38 -- paths/export.sh@5 -- # export PATH 00:20:41.839 17:23:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.839 17:23:38 -- nvmf/common.sh@46 -- # : 0 00:20:41.839 17:23:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:41.839 17:23:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:41.839 17:23:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:41.839 17:23:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.839 17:23:38 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.839 17:23:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:41.839 17:23:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:41.839 17:23:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:41.839 17:23:38 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:41.839 17:23:38 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:41.839 17:23:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.839 17:23:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:41.839 17:23:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:41.839 17:23:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:41.839 17:23:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.839 17:23:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.839 17:23:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.839 17:23:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:41.839 17:23:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:41.839 17:23:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:41.839 17:23:38 -- common/autotest_common.sh@10 -- # set +x 00:20:48.413 17:23:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:48.413 17:23:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:48.413 17:23:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:48.413 17:23:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:48.413 17:23:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:48.413 17:23:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:48.413 17:23:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:48.413 17:23:44 -- nvmf/common.sh@294 -- # net_devs=() 00:20:48.413 17:23:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:48.413 17:23:44 -- nvmf/common.sh@295 -- # e810=() 00:20:48.413 17:23:44 -- nvmf/common.sh@295 -- # local -ga e810 00:20:48.413 17:23:44 -- nvmf/common.sh@296 -- # x722=() 00:20:48.413 17:23:44 -- nvmf/common.sh@296 -- # local -ga x722 00:20:48.413 17:23:44 -- nvmf/common.sh@297 -- # mlx=() 00:20:48.413 17:23:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:48.413 17:23:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.413 17:23:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:48.413 17:23:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:48.413 17:23:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:48.413 17:23:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
00:20:48.413 17:23:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:48.413 17:23:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:48.413 17:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:48.413 17:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:48.413 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:48.413 17:23:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.413 17:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:48.413 17:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:48.413 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:48.413 17:23:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.413 17:23:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:48.413 17:23:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:48.413 17:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:48.413 17:23:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.413 17:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:48.413 17:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.413 17:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:48.413 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:48.413 17:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.413 17:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.414 17:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:48.414 17:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.414 17:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:48.414 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.414 17:23:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:48.414 17:23:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:48.414 17:23:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:48.414 17:23:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:48.414 17:23:44 -- nvmf/common.sh@57 -- # uname 00:20:48.414 17:23:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:48.414 17:23:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:48.414 17:23:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:48.414 17:23:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:48.414 
17:23:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:48.414 17:23:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:48.414 17:23:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:48.414 17:23:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:48.414 17:23:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:48.414 17:23:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:48.414 17:23:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:48.414 17:23:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:48.414 17:23:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:48.414 17:23:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:48.414 17:23:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:48.414 17:23:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:48.414 17:23:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@104 -- # continue 2 00:20:48.414 17:23:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@104 -- # continue 2 00:20:48.414 17:23:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:48.414 17:23:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:48.414 17:23:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:48.414 17:23:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:48.414 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:48.414 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:48.414 altname enp217s0f0np0 00:20:48.414 altname ens818f0np0 00:20:48.414 inet 192.168.100.8/24 scope global mlx_0_0 00:20:48.414 valid_lft forever preferred_lft forever 00:20:48.414 17:23:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:48.414 17:23:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:48.414 17:23:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:48.414 17:23:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:48.414 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:48.414 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:48.414 altname enp217s0f1np1 
00:20:48.414 altname ens818f1np1 00:20:48.414 inet 192.168.100.9/24 scope global mlx_0_1 00:20:48.414 valid_lft forever preferred_lft forever 00:20:48.414 17:23:44 -- nvmf/common.sh@410 -- # return 0 00:20:48.414 17:23:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:48.414 17:23:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:48.414 17:23:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:48.414 17:23:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:48.414 17:23:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:48.414 17:23:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:48.414 17:23:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:48.414 17:23:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:48.414 17:23:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:48.414 17:23:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@104 -- # continue 2 00:20:48.414 17:23:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.414 17:23:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:48.414 17:23:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@104 -- # continue 2 00:20:48.414 17:23:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:48.414 17:23:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:48.414 17:23:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:48.414 17:23:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:48.414 17:23:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:48.414 17:23:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:48.414 192.168.100.9' 00:20:48.414 17:23:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:48.414 192.168.100.9' 00:20:48.414 17:23:44 -- nvmf/common.sh@445 -- # head -n 1 00:20:48.414 17:23:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:48.414 17:23:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:48.414 192.168.100.9' 00:20:48.414 17:23:44 -- nvmf/common.sh@446 -- # head -n 1 00:20:48.414 17:23:44 -- nvmf/common.sh@446 -- # tail -n +2 00:20:48.414 17:23:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:48.414 17:23:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:48.414 17:23:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:20:48.414 17:23:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:48.414 17:23:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:48.414 17:23:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:48.414 17:23:44 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1395204 00:20:48.414 17:23:44 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:48.414 17:23:44 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:48.414 17:23:44 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1395204 00:20:48.414 17:23:44 -- common/autotest_common.sh@829 -- # '[' -z 1395204 ']' 00:20:48.414 17:23:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.414 17:23:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.414 17:23:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.414 17:23:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.414 17:23:44 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 17:23:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.352 17:23:45 -- common/autotest_common.sh@862 -- # return 0 00:20:49.352 17:23:45 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:49.352 17:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.352 17:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 17:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.352 17:23:45 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:49.352 17:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.352 17:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 Malloc0 00:20:49.352 17:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.352 17:23:45 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.352 17:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.352 17:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 17:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.352 17:23:45 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.352 17:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.352 17:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 17:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.352 17:23:45 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:49.352 17:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.352 17:23:45 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 17:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.352 17:23:45 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:49.352 17:23:45 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:21:21.441 Fuzzing completed. Shutting down the fuzz application 00:21:21.441 00:21:21.441 Dumping successful admin opcodes: 00:21:21.441 8, 9, 10, 24, 00:21:21.441 Dumping successful io opcodes: 00:21:21.441 0, 9, 00:21:21.441 NS: 0x200003af1f00 I/O qp, Total commands completed: 1092748, total successful commands: 6418, random_seed: 2887896384 00:21:21.441 NS: 0x200003af1f00 admin qp, Total commands completed: 137984, total successful commands: 1118, random_seed: 2028503872 00:21:21.441 17:24:16 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:21.441 Fuzzing completed. Shutting down the fuzz application 00:21:21.441 00:21:21.441 Dumping successful admin opcodes: 00:21:21.441 24, 00:21:21.441 Dumping successful io opcodes: 00:21:21.441 00:21:21.441 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 952153854 00:21:21.441 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 952230280 00:21:21.441 17:24:17 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.441 17:24:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.441 17:24:17 -- common/autotest_common.sh@10 -- # set +x 00:21:21.441 17:24:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.441 17:24:17 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:21.441 17:24:17 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:21.441 17:24:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:21.441 17:24:17 -- nvmf/common.sh@116 -- # sync 00:21:21.441 17:24:17 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:21.441 17:24:17 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:21.441 17:24:17 -- nvmf/common.sh@119 -- # set +e 00:21:21.441 17:24:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:21.441 17:24:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:21.441 rmmod nvme_rdma 00:21:21.441 rmmod nvme_fabrics 00:21:21.442 17:24:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:21.442 17:24:17 -- nvmf/common.sh@123 -- # set -e 00:21:21.442 17:24:17 -- nvmf/common.sh@124 -- # return 0 00:21:21.442 17:24:17 -- nvmf/common.sh@477 -- # '[' -n 1395204 ']' 00:21:21.442 17:24:17 -- nvmf/common.sh@478 -- # killprocess 1395204 00:21:21.442 17:24:17 -- common/autotest_common.sh@936 -- # '[' -z 1395204 ']' 00:21:21.442 17:24:17 -- common/autotest_common.sh@940 -- # kill -0 1395204 00:21:21.442 17:24:17 -- common/autotest_common.sh@941 -- # uname 00:21:21.442 17:24:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.442 17:24:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1395204 00:21:21.442 17:24:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:21.442 17:24:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:21.442 17:24:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1395204' 00:21:21.442 killing process with pid 1395204 00:21:21.442 17:24:17 -- common/autotest_common.sh@955 -- # kill 1395204 00:21:21.442 17:24:17 -- common/autotest_common.sh@960 -- # wait 1395204 00:21:21.442 
17:24:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:21.442 17:24:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:21.442 17:24:17 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:21.442 00:21:21.442 real 0m39.890s 00:21:21.442 user 0m51.371s 00:21:21.442 sys 0m19.744s 00:21:21.442 17:24:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:21.442 17:24:17 -- common/autotest_common.sh@10 -- # set +x 00:21:21.442 ************************************ 00:21:21.442 END TEST nvmf_fuzz 00:21:21.442 ************************************ 00:21:21.442 17:24:18 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:21.442 17:24:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:21.442 17:24:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:21.442 17:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:21.442 ************************************ 00:21:21.442 START TEST nvmf_multiconnection 00:21:21.442 ************************************ 00:21:21.442 17:24:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:21.442 * Looking for test storage... 00:21:21.702 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:21.702 17:24:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:21.702 17:24:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:21.702 17:24:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:21.702 17:24:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:21.702 17:24:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:21.702 17:24:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:21.702 17:24:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:21.702 17:24:18 -- scripts/common.sh@335 -- # IFS=.-: 00:21:21.702 17:24:18 -- scripts/common.sh@335 -- # read -ra ver1 00:21:21.702 17:24:18 -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.702 17:24:18 -- scripts/common.sh@336 -- # read -ra ver2 00:21:21.702 17:24:18 -- scripts/common.sh@337 -- # local 'op=<' 00:21:21.702 17:24:18 -- scripts/common.sh@339 -- # ver1_l=2 00:21:21.702 17:24:18 -- scripts/common.sh@340 -- # ver2_l=1 00:21:21.702 17:24:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:21.702 17:24:18 -- scripts/common.sh@343 -- # case "$op" in 00:21:21.702 17:24:18 -- scripts/common.sh@344 -- # : 1 00:21:21.702 17:24:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:21.702 17:24:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.702 17:24:18 -- scripts/common.sh@364 -- # decimal 1 00:21:21.702 17:24:18 -- scripts/common.sh@352 -- # local d=1 00:21:21.702 17:24:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.702 17:24:18 -- scripts/common.sh@354 -- # echo 1 00:21:21.702 17:24:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:21.702 17:24:18 -- scripts/common.sh@365 -- # decimal 2 00:21:21.702 17:24:18 -- scripts/common.sh@352 -- # local d=2 00:21:21.702 17:24:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.702 17:24:18 -- scripts/common.sh@354 -- # echo 2 00:21:21.702 17:24:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:21.702 17:24:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:21.702 17:24:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:21.702 17:24:18 -- scripts/common.sh@367 -- # return 0 00:21:21.702 17:24:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.702 17:24:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:21.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.702 --rc genhtml_branch_coverage=1 00:21:21.702 --rc genhtml_function_coverage=1 00:21:21.702 --rc genhtml_legend=1 00:21:21.702 --rc geninfo_all_blocks=1 00:21:21.702 --rc geninfo_unexecuted_blocks=1 00:21:21.702 00:21:21.702 ' 00:21:21.702 17:24:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:21.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.702 --rc genhtml_branch_coverage=1 00:21:21.702 --rc genhtml_function_coverage=1 00:21:21.702 --rc genhtml_legend=1 00:21:21.702 --rc geninfo_all_blocks=1 00:21:21.702 --rc geninfo_unexecuted_blocks=1 00:21:21.702 00:21:21.702 ' 00:21:21.702 17:24:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:21.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.702 --rc genhtml_branch_coverage=1 00:21:21.702 --rc genhtml_function_coverage=1 00:21:21.702 --rc genhtml_legend=1 00:21:21.702 --rc geninfo_all_blocks=1 00:21:21.702 --rc geninfo_unexecuted_blocks=1 00:21:21.702 00:21:21.702 ' 00:21:21.702 17:24:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:21.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.702 --rc genhtml_branch_coverage=1 00:21:21.702 --rc genhtml_function_coverage=1 00:21:21.702 --rc genhtml_legend=1 00:21:21.702 --rc geninfo_all_blocks=1 00:21:21.702 --rc geninfo_unexecuted_blocks=1 00:21:21.702 00:21:21.702 ' 00:21:21.702 17:24:18 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.702 17:24:18 -- nvmf/common.sh@7 -- # uname -s 00:21:21.702 17:24:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.702 17:24:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.702 17:24:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.702 17:24:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.702 17:24:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.702 17:24:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.702 17:24:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.702 17:24:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.702 17:24:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.702 17:24:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.702 17:24:18 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:21.702 17:24:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:21.702 17:24:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.702 17:24:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.702 17:24:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.702 17:24:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:21.703 17:24:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.703 17:24:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.703 17:24:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.703 17:24:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.703 17:24:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.703 17:24:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.703 17:24:18 -- paths/export.sh@5 -- # export PATH 00:21:21.703 17:24:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.703 17:24:18 -- nvmf/common.sh@46 -- # : 0 00:21:21.703 17:24:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:21.703 17:24:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:21.703 17:24:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:21.703 17:24:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.703 17:24:18 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.703 17:24:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:21.703 17:24:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:21.703 17:24:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:21.703 17:24:18 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.703 17:24:18 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.703 17:24:18 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:21.703 17:24:18 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:21.703 17:24:18 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:21.703 17:24:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.703 17:24:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:21.703 17:24:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:21.703 17:24:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:21.703 17:24:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.703 17:24:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.703 17:24:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.703 17:24:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:21.703 17:24:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:21.703 17:24:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:21.703 17:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.831 17:24:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:29.831 17:24:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:29.831 17:24:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:29.831 17:24:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:29.831 17:24:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:29.831 17:24:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:29.831 17:24:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:29.831 17:24:25 -- nvmf/common.sh@294 -- # net_devs=() 00:21:29.831 17:24:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:29.831 17:24:25 -- nvmf/common.sh@295 -- # e810=() 00:21:29.831 17:24:25 -- nvmf/common.sh@295 -- # local -ga e810 00:21:29.831 17:24:25 -- nvmf/common.sh@296 -- # x722=() 00:21:29.831 17:24:25 -- nvmf/common.sh@296 -- # local -ga x722 00:21:29.831 17:24:25 -- nvmf/common.sh@297 -- # mlx=() 00:21:29.831 17:24:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:29.831 17:24:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.831 17:24:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:29.831 17:24:25 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:29.831 17:24:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:29.831 17:24:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:29.831 17:24:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:29.831 17:24:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:29.831 17:24:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:29.831 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:29.831 17:24:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.831 17:24:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:29.831 17:24:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:29.831 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:29.831 17:24:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.831 17:24:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:29.831 17:24:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.831 17:24:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.831 17:24:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.831 17:24:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.831 17:24:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:29.831 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:29.831 17:24:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.831 17:24:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.831 17:24:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.831 17:24:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.831 17:24:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.831 17:24:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:29.831 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:29.831 17:24:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.831 17:24:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:29.831 17:24:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:29.831 17:24:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:29.831 17:24:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:29.832 17:24:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:29.832 17:24:25 -- nvmf/common.sh@57 -- # uname 00:21:29.832 17:24:25 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:21:29.832 17:24:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:29.832 17:24:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:29.832 17:24:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:29.832 17:24:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:29.832 17:24:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:29.832 17:24:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:29.832 17:24:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:29.832 17:24:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:29.832 17:24:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:29.832 17:24:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:29.832 17:24:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.832 17:24:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:29.832 17:24:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:29.832 17:24:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.832 17:24:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:29.832 17:24:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@104 -- # continue 2 00:21:29.832 17:24:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@104 -- # continue 2 00:21:29.832 17:24:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:29.832 17:24:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.832 17:24:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:29.832 17:24:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:29.832 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.832 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:29.832 altname enp217s0f0np0 00:21:29.832 altname ens818f0np0 00:21:29.832 inet 192.168.100.8/24 scope global mlx_0_0 00:21:29.832 valid_lft forever preferred_lft forever 00:21:29.832 17:24:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:29.832 17:24:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.832 17:24:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:29.832 17:24:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:29.832 17:24:25 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:29.832 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.832 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:29.832 altname enp217s0f1np1 00:21:29.832 altname ens818f1np1 00:21:29.832 inet 192.168.100.9/24 scope global mlx_0_1 00:21:29.832 valid_lft forever preferred_lft forever 00:21:29.832 17:24:25 -- nvmf/common.sh@410 -- # return 0 00:21:29.832 17:24:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:29.832 17:24:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:29.832 17:24:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:29.832 17:24:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:29.832 17:24:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.832 17:24:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:29.832 17:24:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:29.832 17:24:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.832 17:24:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:29.832 17:24:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@104 -- # continue 2 00:21:29.832 17:24:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.832 17:24:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.832 17:24:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@104 -- # continue 2 00:21:29.832 17:24:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:29.832 17:24:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.832 17:24:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:29.832 17:24:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.832 17:24:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.832 17:24:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:29.832 192.168.100.9' 00:21:29.832 17:24:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:29.832 192.168.100.9' 00:21:29.832 17:24:25 -- nvmf/common.sh@445 -- # head -n 1 00:21:29.832 17:24:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:29.832 17:24:25 -- nvmf/common.sh@446 -- # tail -n +2 00:21:29.832 17:24:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:29.832 192.168.100.9' 00:21:29.832 17:24:25 -- nvmf/common.sh@446 -- # head -n 1 00:21:29.832 17:24:25 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:29.832 17:24:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:29.832 17:24:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:29.832 17:24:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:29.832 17:24:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:29.832 17:24:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:29.832 17:24:25 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:29.832 17:24:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:29.832 17:24:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.832 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:29.832 17:24:25 -- nvmf/common.sh@469 -- # nvmfpid=1404707 00:21:29.832 17:24:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.832 17:24:25 -- nvmf/common.sh@470 -- # waitforlisten 1404707 00:21:29.832 17:24:25 -- common/autotest_common.sh@829 -- # '[' -z 1404707 ']' 00:21:29.832 17:24:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.832 17:24:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.832 17:24:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.832 17:24:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.832 17:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:29.832 [2024-12-14 17:24:25.312597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:29.832 [2024-12-14 17:24:25.312645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.832 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.832 [2024-12-14 17:24:25.383708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.832 [2024-12-14 17:24:25.422992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.832 [2024-12-14 17:24:25.423104] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.832 [2024-12-14 17:24:25.423115] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.832 [2024-12-14 17:24:25.423123] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
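The trace above amounts to the target bring-up: launch nvmf_tgt with shm id 0, trace mask 0xFFFF and core mask 0xF, wait for its RPC socket, then create the RDMA transport. A minimal standalone sketch of that sequence follows, assuming an SPDK build under $SPDK_ROOT and the stock rpc.py client; the polling loop is illustrative and is not the harness's own waitforlisten helper.

SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}   # assumed checkout location
RPC_SOCK=/var/tmp/spdk.sock

# Start the NVMe-oF target with the same arguments recorded in the trace.
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                   # kept so the target can be killed at teardown

# Poll the UNIX-domain RPC socket until the target answers.
for _ in $(seq 1 100); do
    "$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done

# Transport options as derived in the trace for a phy RDMA setup.
"$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192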
00:21:29.832 [2024-12-14 17:24:25.423171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.832 [2024-12-14 17:24:25.423282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.832 [2024-12-14 17:24:25.423345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.832 [2024-12-14 17:24:25.423347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.832 17:24:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.832 17:24:26 -- common/autotest_common.sh@862 -- # return 0 00:21:29.832 17:24:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:29.832 17:24:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.832 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.832 17:24:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.832 17:24:26 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:29.832 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.832 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.832 [2024-12-14 17:24:26.208117] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x67e0d0/0x6825a0) succeed. 00:21:29.832 [2024-12-14 17:24:26.217445] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x67f670/0x6c3c40) succeed. 00:21:29.832 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.832 17:24:26 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:29.832 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:29.832 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:29.832 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.832 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.832 Malloc1 00:21:29.832 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 [2024-12-14 17:24:26.395766] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:29.833 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 
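The trace now repeats the same four RPCs for Malloc2 through Malloc11. Condensed into a loop, the per-subsystem sequence looks like the sketch below; the loop form and the rpc.py front end are illustrative, while the RPC names, the 64 MiB / 512 B malloc bdev parameters and the 192.168.100.8:4420 RDMA listener are the ones recorded here.

rpc="$SPDK_ROOT/scripts/rpc.py"          # assumes SPDK_ROOT as in the sketch above
for i in $(seq 1 11); do
    "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"                        # 64 MiB RAM-backed bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done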
00:21:29.833 Malloc2 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:29.833 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 Malloc3 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:29.833 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 Malloc4 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:29.833 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:29.833 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.833 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:29.833 17:24:26 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.833 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.093 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 Malloc5 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.093 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 Malloc6 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.093 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 Malloc7 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.093 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 Malloc8 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.093 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 Malloc9 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.093 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.093 Malloc10 00:21:30.093 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.093 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:30.093 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.093 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.353 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.353 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:30.353 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.353 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.353 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.353 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:30.353 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.353 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.353 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.353 17:24:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.353 17:24:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:30.353 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.353 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.353 Malloc11 00:21:30.353 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.353 17:24:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:30.353 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.353 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.353 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.353 17:24:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:30.353 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.353 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.353 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.353 17:24:26 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:30.353 17:24:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.353 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.353 17:24:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.353 17:24:26 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:30.353 17:24:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.353 17:24:26 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:31.290 17:24:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:31.290 17:24:27 -- common/autotest_common.sh@1187 -- # local i=0 00:21:31.290 17:24:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:31.290 17:24:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:31.290 17:24:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:33.197 17:24:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:33.197 17:24:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:33.197 17:24:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:21:33.197 17:24:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:33.197 17:24:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:33.197 17:24:29 -- common/autotest_common.sh@1197 -- # return 0 00:21:33.197 17:24:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:33.197 17:24:29 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:34.576 17:24:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:34.576 17:24:30 -- common/autotest_common.sh@1187 -- # local i=0 00:21:34.576 17:24:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:34.576 17:24:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:34.576 17:24:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:36.483 17:24:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:36.483 17:24:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:36.483 17:24:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:21:36.483 17:24:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:36.483 17:24:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:36.483 17:24:32 -- common/autotest_common.sh@1197 -- # return 0 00:21:36.483 17:24:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:36.483 17:24:32 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:37.421 17:24:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:37.421 17:24:33 -- common/autotest_common.sh@1187 -- # local i=0 00:21:37.421 17:24:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.421 17:24:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:37.421 17:24:33 -- 
common/autotest_common.sh@1194 -- # sleep 2 00:21:39.326 17:24:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:39.326 17:24:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:39.326 17:24:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:21:39.326 17:24:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:39.326 17:24:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:39.326 17:24:35 -- common/autotest_common.sh@1197 -- # return 0 00:21:39.326 17:24:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:39.326 17:24:35 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:40.269 17:24:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:40.269 17:24:36 -- common/autotest_common.sh@1187 -- # local i=0 00:21:40.269 17:24:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:40.269 17:24:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:40.269 17:24:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:42.805 17:24:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:42.805 17:24:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:42.805 17:24:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:21:42.805 17:24:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:42.805 17:24:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:42.805 17:24:38 -- common/autotest_common.sh@1197 -- # return 0 00:21:42.805 17:24:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:42.805 17:24:38 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:43.373 17:24:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:43.373 17:24:39 -- common/autotest_common.sh@1187 -- # local i=0 00:21:43.373 17:24:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:43.373 17:24:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:43.373 17:24:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:45.283 17:24:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:45.283 17:24:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:45.283 17:24:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:21:45.283 17:24:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:45.283 17:24:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:45.283 17:24:41 -- common/autotest_common.sh@1197 -- # return 0 00:21:45.283 17:24:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.283 17:24:41 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:46.323 17:24:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:46.323 17:24:42 -- common/autotest_common.sh@1187 -- # local i=0 00:21:46.323 17:24:42 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:46.323 17:24:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:46.323 17:24:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:48.229 17:24:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:48.229 17:24:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:48.229 17:24:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:21:48.488 17:24:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:48.488 17:24:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:48.488 17:24:44 -- common/autotest_common.sh@1197 -- # return 0 00:21:48.488 17:24:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.488 17:24:44 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:49.428 17:24:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:49.428 17:24:45 -- common/autotest_common.sh@1187 -- # local i=0 00:21:49.428 17:24:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:49.428 17:24:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:49.428 17:24:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:51.334 17:24:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:51.334 17:24:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:51.334 17:24:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:21:51.334 17:24:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:51.334 17:24:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:51.334 17:24:47 -- common/autotest_common.sh@1197 -- # return 0 00:21:51.334 17:24:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.334 17:24:47 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:52.272 17:24:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:52.272 17:24:48 -- common/autotest_common.sh@1187 -- # local i=0 00:21:52.272 17:24:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:52.272 17:24:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:52.272 17:24:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:54.809 17:24:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:54.809 17:24:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:54.809 17:24:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:21:54.809 17:24:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:54.809 17:24:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:54.809 17:24:50 -- common/autotest_common.sh@1197 -- # return 0 00:21:54.809 17:24:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:54.809 17:24:50 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:55.377 
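On the host side the test attaches to each subsystem in turn and waits for its namespace to surface. A condensed sketch of that connect-and-wait loop: the loop form and variable names are mine, while the nvme-cli arguments, hostnqn/hostid and the lsblk-based serial check are taken from the trace.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
HOSTID=8013ee90-59d8-e711-906e-00163566263e
for i in $(seq 1 11); do
    nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    # waitforserial: poll until a block device whose serial is SPDK$i shows up.
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        sleep 2
    done
done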
17:24:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:55.377 17:24:51 -- common/autotest_common.sh@1187 -- # local i=0 00:21:55.377 17:24:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:55.377 17:24:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:55.377 17:24:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:57.283 17:24:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:57.283 17:24:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:57.283 17:24:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:21:57.543 17:24:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:57.543 17:24:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:57.543 17:24:53 -- common/autotest_common.sh@1197 -- # return 0 00:21:57.543 17:24:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.543 17:24:53 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:58.483 17:24:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:58.483 17:24:54 -- common/autotest_common.sh@1187 -- # local i=0 00:21:58.483 17:24:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:58.483 17:24:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:58.483 17:24:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:00.392 17:24:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:00.392 17:24:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:00.392 17:24:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:22:00.392 17:24:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:00.392 17:24:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:00.392 17:24:57 -- common/autotest_common.sh@1197 -- # return 0 00:22:00.392 17:24:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.392 17:24:57 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:01.329 17:24:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:01.329 17:24:57 -- common/autotest_common.sh@1187 -- # local i=0 00:22:01.329 17:24:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.329 17:24:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:01.329 17:24:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:03.863 17:24:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:03.863 17:24:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:03.863 17:24:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:22:03.863 17:25:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:03.863 17:25:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.863 17:25:00 -- common/autotest_common.sh@1197 -- # return 0 00:22:03.863 17:25:00 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:03.863 [global] 00:22:03.863 
thread=1 00:22:03.863 invalidate=1 00:22:03.863 rw=read 00:22:03.863 time_based=1 00:22:03.863 runtime=10 00:22:03.863 ioengine=libaio 00:22:03.863 direct=1 00:22:03.863 bs=262144 00:22:03.863 iodepth=64 00:22:03.863 norandommap=1 00:22:03.863 numjobs=1 00:22:03.863 00:22:03.863 [job0] 00:22:03.863 filename=/dev/nvme0n1 00:22:03.863 [job1] 00:22:03.863 filename=/dev/nvme10n1 00:22:03.863 [job2] 00:22:03.863 filename=/dev/nvme1n1 00:22:03.863 [job3] 00:22:03.863 filename=/dev/nvme2n1 00:22:03.863 [job4] 00:22:03.863 filename=/dev/nvme3n1 00:22:03.863 [job5] 00:22:03.863 filename=/dev/nvme4n1 00:22:03.863 [job6] 00:22:03.863 filename=/dev/nvme5n1 00:22:03.863 [job7] 00:22:03.863 filename=/dev/nvme6n1 00:22:03.863 [job8] 00:22:03.863 filename=/dev/nvme7n1 00:22:03.863 [job9] 00:22:03.863 filename=/dev/nvme8n1 00:22:03.863 [job10] 00:22:03.863 filename=/dev/nvme9n1 00:22:03.863 Could not set queue depth (nvme0n1) 00:22:03.863 Could not set queue depth (nvme10n1) 00:22:03.863 Could not set queue depth (nvme1n1) 00:22:03.863 Could not set queue depth (nvme2n1) 00:22:03.863 Could not set queue depth (nvme3n1) 00:22:03.863 Could not set queue depth (nvme4n1) 00:22:03.863 Could not set queue depth (nvme5n1) 00:22:03.864 Could not set queue depth (nvme6n1) 00:22:03.864 Could not set queue depth (nvme7n1) 00:22:03.864 Could not set queue depth (nvme8n1) 00:22:03.864 Could not set queue depth (nvme9n1) 00:22:04.122 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.122 fio-3.35 00:22:04.122 Starting 11 threads 00:22:16.338 00:22:16.338 job0: (groupid=0, jobs=1): err= 0: pid=1411187: Sat Dec 14 17:25:11 2024 00:22:16.338 read: IOPS=1141, BW=285MiB/s (299MB/s)(2866MiB/10044msec) 00:22:16.338 slat (usec): min=15, max=22594, avg=868.52, stdev=2230.89 00:22:16.338 clat (usec): min=13650, max=95722, avg=55156.15, stdev=11446.58 00:22:16.338 lat (usec): min=13904, max=97198, avg=56024.67, stdev=11775.33 00:22:16.338 clat percentiles (usec): 00:22:16.338 | 1.00th=[44303], 5.00th=[45351], 10.00th=[45876], 20.00th=[46400], 00:22:16.338 | 30.00th=[46924], 40.00th=[47973], 50.00th=[48497], 60.00th=[51643], 00:22:16.338 | 70.00th=[62129], 80.00th=[63701], 90.00th=[77071], 95.00th=[79168], 00:22:16.338 | 99.00th=[83362], 99.50th=[85459], 
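For reference, the read phase whose per-job statistics are reported above can be reproduced outside the harness with a plain fio run; the job file below restates the [global] options and per-device filenames that the fio-wrapper call appears to generate internally. The temp-file path is an arbitrary choice, and one [jobN] stanza is needed per connected namespace.

cat > /tmp/nvmf_multiconnection_read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
# ...repeat one [jobN] stanza for each remaining namespace (nvme1n1 .. nvme10n1) as listed above...
fio /tmp/nvmf_multiconnection_read.fio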
99.90th=[90702], 99.95th=[92799], 00:22:16.338 | 99.99th=[95945] 00:22:16.338 bw ( KiB/s): min=200704, max=342528, per=7.16%, avg=291786.90, stdev=51528.87, samples=20 00:22:16.338 iops : min= 784, max= 1338, avg=1139.75, stdev=201.30, samples=20 00:22:16.338 lat (msec) : 20=0.22%, 50=56.13%, 100=43.65% 00:22:16.338 cpu : usr=0.36%, sys=5.37%, ctx=2188, majf=0, minf=4097 00:22:16.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:16.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.338 issued rwts: total=11462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.338 job1: (groupid=0, jobs=1): err= 0: pid=1411188: Sat Dec 14 17:25:11 2024 00:22:16.338 read: IOPS=1217, BW=304MiB/s (319MB/s)(3057MiB/10043msec) 00:22:16.338 slat (usec): min=13, max=17023, avg=808.30, stdev=2035.03 00:22:16.338 clat (usec): min=13474, max=98592, avg=51709.41, stdev=12774.46 00:22:16.338 lat (usec): min=13746, max=98633, avg=52517.71, stdev=13085.10 00:22:16.338 clat percentiles (usec): 00:22:16.338 | 1.00th=[25560], 5.00th=[30278], 10.00th=[34341], 20.00th=[45876], 00:22:16.338 | 30.00th=[46400], 40.00th=[46924], 50.00th=[47973], 60.00th=[49021], 00:22:16.338 | 70.00th=[55313], 80.00th=[62653], 90.00th=[66847], 95.00th=[79168], 00:22:16.338 | 99.00th=[84411], 99.50th=[85459], 99.90th=[92799], 99.95th=[93848], 00:22:16.338 | 99.99th=[99091] 00:22:16.338 bw ( KiB/s): min=199168, max=486400, per=7.64%, avg=311329.80, stdev=71253.44, samples=20 00:22:16.338 iops : min= 778, max= 1900, avg=1216.10, stdev=278.28, samples=20 00:22:16.338 lat (msec) : 20=0.36%, 50=62.50%, 100=37.14% 00:22:16.338 cpu : usr=0.46%, sys=5.49%, ctx=2391, majf=0, minf=4097 00:22:16.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:16.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.338 issued rwts: total=12226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.338 job2: (groupid=0, jobs=1): err= 0: pid=1411189: Sat Dec 14 17:25:11 2024 00:22:16.338 read: IOPS=1139, BW=285MiB/s (299MB/s)(2863MiB/10044msec) 00:22:16.338 slat (usec): min=16, max=18069, avg=869.30, stdev=2266.24 00:22:16.338 clat (usec): min=13450, max=96469, avg=55214.12, stdev=11490.63 00:22:16.338 lat (usec): min=13725, max=96527, avg=56083.42, stdev=11817.18 00:22:16.338 clat percentiles (usec): 00:22:16.338 | 1.00th=[44303], 5.00th=[45351], 10.00th=[45876], 20.00th=[46400], 00:22:16.338 | 30.00th=[46924], 40.00th=[47973], 50.00th=[49021], 60.00th=[52167], 00:22:16.338 | 70.00th=[62129], 80.00th=[63701], 90.00th=[77071], 95.00th=[79168], 00:22:16.338 | 99.00th=[83362], 99.50th=[86508], 99.90th=[92799], 99.95th=[92799], 00:22:16.338 | 99.99th=[93848] 00:22:16.338 bw ( KiB/s): min=199168, max=344576, per=7.15%, avg=291479.70, stdev=51765.33, samples=20 00:22:16.338 iops : min= 778, max= 1346, avg=1138.55, stdev=202.22, samples=20 00:22:16.338 lat (msec) : 20=0.22%, 50=55.27%, 100=44.52% 00:22:16.338 cpu : usr=0.46%, sys=5.19%, ctx=2128, majf=0, minf=4097 00:22:16.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:16.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.338 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.338 issued rwts: total=11450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.338 job3: (groupid=0, jobs=1): err= 0: pid=1411190: Sat Dec 14 17:25:11 2024 00:22:16.338 read: IOPS=974, BW=244MiB/s (255MB/s)(2446MiB/10043msec) 00:22:16.338 slat (usec): min=13, max=41046, avg=995.93, stdev=2767.81 00:22:16.338 clat (msec): min=16, max=101, avg=64.62, stdev= 8.70 00:22:16.338 lat (msec): min=16, max=120, avg=65.62, stdev= 9.17 00:22:16.338 clat percentiles (msec): 00:22:16.338 | 1.00th=[ 32], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 63], 00:22:16.338 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 64], 60.00th=[ 65], 00:22:16.338 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 81], 00:22:16.339 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 92], 99.95th=[ 96], 00:22:16.339 | 99.99th=[ 103] 00:22:16.339 bw ( KiB/s): min=200192, max=298922, per=6.10%, avg=248776.50, stdev=20030.05, samples=20 00:22:16.339 iops : min= 782, max= 1167, avg=971.75, stdev=78.15, samples=20 00:22:16.339 lat (msec) : 20=0.33%, 50=4.61%, 100=95.04%, 250=0.02% 00:22:16.339 cpu : usr=0.37%, sys=4.15%, ctx=2064, majf=0, minf=4097 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=9782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 job4: (groupid=0, jobs=1): err= 0: pid=1411191: Sat Dec 14 17:25:11 2024 00:22:16.339 read: IOPS=1054, BW=264MiB/s (276MB/s)(2647MiB/10043msec) 00:22:16.339 slat (usec): min=14, max=33292, avg=910.79, stdev=2505.76 00:22:16.339 clat (msec): min=11, max=109, avg=59.74, stdev=11.10 00:22:16.339 lat (msec): min=11, max=114, avg=60.65, stdev=11.48 00:22:16.339 clat percentiles (msec): 00:22:16.339 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 48], 00:22:16.339 | 30.00th=[ 50], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:22:16.339 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 78], 95.00th=[ 80], 00:22:16.339 | 99.00th=[ 84], 99.50th=[ 86], 99.90th=[ 91], 99.95th=[ 106], 00:22:16.339 | 99.99th=[ 109] 00:22:16.339 bw ( KiB/s): min=203776, max=340480, per=6.61%, avg=269386.35, stdev=40064.75, samples=20 00:22:16.339 iops : min= 796, max= 1330, avg=1052.25, stdev=156.49, samples=20 00:22:16.339 lat (msec) : 20=0.43%, 50=30.60%, 100=68.91%, 250=0.06% 00:22:16.339 cpu : usr=0.57%, sys=4.84%, ctx=2217, majf=0, minf=4097 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=10587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 job5: (groupid=0, jobs=1): err= 0: pid=1411192: Sat Dec 14 17:25:11 2024 00:22:16.339 read: IOPS=1320, BW=330MiB/s (346MB/s)(3317MiB/10043msec) 00:22:16.339 slat (usec): min=12, max=13677, avg=745.06, stdev=1806.52 00:22:16.339 clat (usec): min=12248, max=95468, avg=47655.18, stdev=3635.36 00:22:16.339 lat (usec): min=12530, max=95510, avg=48400.24, stdev=3975.41 00:22:16.339 clat percentiles (usec): 00:22:16.339 | 1.00th=[33162], 
5.00th=[45351], 10.00th=[45876], 20.00th=[45876], 00:22:16.339 | 30.00th=[46400], 40.00th=[46924], 50.00th=[47449], 60.00th=[47973], 00:22:16.339 | 70.00th=[48497], 80.00th=[49021], 90.00th=[50594], 95.00th=[52167], 00:22:16.339 | 99.00th=[56361], 99.50th=[58459], 99.90th=[84411], 99.95th=[87557], 00:22:16.339 | 99.99th=[95945] 00:22:16.339 bw ( KiB/s): min=317828, max=364032, per=8.29%, avg=337965.00, stdev=8379.16, samples=20 00:22:16.339 iops : min= 1241, max= 1422, avg=1320.15, stdev=32.80, samples=20 00:22:16.339 lat (msec) : 20=0.29%, 50=86.33%, 100=13.37% 00:22:16.339 cpu : usr=0.63%, sys=5.97%, ctx=2539, majf=0, minf=4097 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=13266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 job6: (groupid=0, jobs=1): err= 0: pid=1411193: Sat Dec 14 17:25:11 2024 00:22:16.339 read: IOPS=1317, BW=329MiB/s (345MB/s)(3309MiB/10043msec) 00:22:16.339 slat (usec): min=12, max=13811, avg=751.27, stdev=1808.89 00:22:16.339 clat (usec): min=12341, max=97171, avg=47757.36, stdev=3603.42 00:22:16.339 lat (usec): min=12611, max=97212, avg=48508.63, stdev=3918.14 00:22:16.339 clat percentiles (usec): 00:22:16.339 | 1.00th=[39584], 5.00th=[45351], 10.00th=[45876], 20.00th=[46400], 00:22:16.339 | 30.00th=[46400], 40.00th=[46924], 50.00th=[47449], 60.00th=[47973], 00:22:16.339 | 70.00th=[48497], 80.00th=[49021], 90.00th=[50594], 95.00th=[52167], 00:22:16.339 | 99.00th=[56361], 99.50th=[58459], 99.90th=[82314], 99.95th=[89654], 00:22:16.339 | 99.99th=[90702] 00:22:16.339 bw ( KiB/s): min=319361, max=349696, per=8.27%, avg=337196.85, stdev=6617.56, samples=20 00:22:16.339 iops : min= 1247, max= 1366, avg=1317.15, stdev=25.92, samples=20 00:22:16.339 lat (msec) : 20=0.29%, 50=86.31%, 100=13.40% 00:22:16.339 cpu : usr=0.78%, sys=5.85%, ctx=2492, majf=0, minf=4097 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=13236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 job7: (groupid=0, jobs=1): err= 0: pid=1411194: Sat Dec 14 17:25:11 2024 00:22:16.339 read: IOPS=1237, BW=309MiB/s (324MB/s)(3104MiB/10031msec) 00:22:16.339 slat (usec): min=15, max=22744, avg=791.86, stdev=1982.84 00:22:16.339 clat (usec): min=11377, max=99199, avg=50862.90, stdev=14180.95 00:22:16.339 lat (msec): min=11, max=102, avg=51.65, stdev=14.49 00:22:16.339 clat percentiles (usec): 00:22:16.339 | 1.00th=[25822], 5.00th=[30278], 10.00th=[31065], 20.00th=[32637], 00:22:16.339 | 30.00th=[45876], 40.00th=[47449], 50.00th=[49021], 60.00th=[61604], 00:22:16.339 | 70.00th=[62653], 80.00th=[63701], 90.00th=[65799], 95.00th=[68682], 00:22:16.339 | 99.00th=[79168], 99.50th=[79168], 99.90th=[82314], 99.95th=[83362], 00:22:16.339 | 99.99th=[99091] 00:22:16.339 bw ( KiB/s): min=231936, max=506880, per=7.76%, avg=316161.55, stdev=90649.16, samples=20 00:22:16.339 iops : min= 906, max= 1980, avg=1235.00, stdev=354.09, samples=20 00:22:16.339 lat (msec) : 20=0.64%, 50=51.49%, 100=47.88% 00:22:16.339 
cpu : usr=0.62%, sys=5.64%, ctx=2385, majf=0, minf=3659 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=12415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 job8: (groupid=0, jobs=1): err= 0: pid=1411195: Sat Dec 14 17:25:11 2024 00:22:16.339 read: IOPS=1341, BW=335MiB/s (352MB/s)(3364MiB/10030msec) 00:22:16.339 slat (usec): min=12, max=24831, avg=739.68, stdev=1989.40 00:22:16.339 clat (usec): min=12379, max=86574, avg=46902.59, stdev=16403.21 00:22:16.339 lat (usec): min=12607, max=88854, avg=47642.27, stdev=16740.94 00:22:16.339 clat percentiles (usec): 00:22:16.339 | 1.00th=[13566], 5.00th=[14615], 10.00th=[28181], 20.00th=[31589], 00:22:16.339 | 30.00th=[33162], 40.00th=[46400], 50.00th=[47973], 60.00th=[52167], 00:22:16.339 | 70.00th=[62129], 80.00th=[63177], 90.00th=[64750], 95.00th=[66323], 00:22:16.339 | 99.00th=[70779], 99.50th=[72877], 99.90th=[78119], 99.95th=[81265], 00:22:16.339 | 99.99th=[86508] 00:22:16.339 bw ( KiB/s): min=247296, max=763904, per=8.41%, avg=342836.30, stdev=132686.44, samples=20 00:22:16.339 iops : min= 966, max= 2984, avg=1339.20, stdev=518.30, samples=20 00:22:16.339 lat (msec) : 20=8.88%, 50=49.10%, 100=42.02% 00:22:16.339 cpu : usr=0.45%, sys=5.59%, ctx=2510, majf=0, minf=4097 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=13457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 job9: (groupid=0, jobs=1): err= 0: pid=1411196: Sat Dec 14 17:25:11 2024 00:22:16.339 read: IOPS=3865, BW=966MiB/s (1013MB/s)(9703MiB/10042msec) 00:22:16.339 slat (usec): min=11, max=9481, avg=256.19, stdev=583.24 00:22:16.339 clat (usec): min=1835, max=83593, avg=16284.60, stdev=4305.25 00:22:16.339 lat (usec): min=1864, max=83639, avg=16540.79, stdev=4367.41 00:22:16.339 clat percentiles (usec): 00:22:16.339 | 1.00th=[13829], 5.00th=[14353], 10.00th=[14615], 20.00th=[14877], 00:22:16.339 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:22:16.339 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17433], 00:22:16.339 | 99.00th=[36439], 99.50th=[45351], 99.90th=[63177], 99.95th=[70779], 00:22:16.339 | 99.99th=[83362] 00:22:16.339 bw ( KiB/s): min=421556, max=1050624, per=24.33%, avg=991957.80, stdev=157530.00, samples=20 00:22:16.339 iops : min= 1646, max= 4104, avg=3874.80, stdev=615.49, samples=20 00:22:16.339 lat (msec) : 2=0.01%, 4=0.04%, 10=0.14%, 20=95.33%, 50=4.31% 00:22:16.339 lat (msec) : 100=0.18% 00:22:16.339 cpu : usr=0.52%, sys=8.79%, ctx=7659, majf=0, minf=4097 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=38813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 job10: (groupid=0, jobs=1): err= 0: 
pid=1411197: Sat Dec 14 17:25:11 2024 00:22:16.339 read: IOPS=1319, BW=330MiB/s (346MB/s)(3312MiB/10042msec) 00:22:16.339 slat (usec): min=13, max=15393, avg=750.49, stdev=1792.39 00:22:16.339 clat (usec): min=12433, max=92374, avg=47715.81, stdev=3444.32 00:22:16.339 lat (usec): min=12699, max=99054, avg=48466.30, stdev=3771.80 00:22:16.339 clat percentiles (usec): 00:22:16.339 | 1.00th=[38536], 5.00th=[45351], 10.00th=[45876], 20.00th=[46400], 00:22:16.339 | 30.00th=[46400], 40.00th=[46924], 50.00th=[47449], 60.00th=[47973], 00:22:16.339 | 70.00th=[48497], 80.00th=[49021], 90.00th=[50594], 95.00th=[52167], 00:22:16.339 | 99.00th=[56361], 99.50th=[58983], 99.90th=[72877], 99.95th=[87557], 00:22:16.339 | 99.99th=[92799] 00:22:16.339 bw ( KiB/s): min=320383, max=351232, per=8.28%, avg=337478.35, stdev=6020.73, samples=20 00:22:16.339 iops : min= 1251, max= 1372, avg=1318.25, stdev=23.59, samples=20 00:22:16.339 lat (msec) : 20=0.27%, 50=86.22%, 100=13.51% 00:22:16.339 cpu : usr=0.77%, sys=5.85%, ctx=2500, majf=0, minf=4097 00:22:16.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:16.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:16.339 issued rwts: total=13247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:16.339 00:22:16.339 Run status group 0 (all jobs): 00:22:16.339 READ: bw=3981MiB/s (4174MB/s), 244MiB/s-966MiB/s (255MB/s-1013MB/s), io=39.0GiB (41.9GB), run=10030-10044msec 00:22:16.339 00:22:16.339 Disk stats (read/write): 00:22:16.339 nvme0n1: ios=22489/0, merge=0/0, ticks=1220211/0, in_queue=1220211, util=96.67% 00:22:16.340 nvme10n1: ios=24008/0, merge=0/0, ticks=1221700/0, in_queue=1221700, util=96.89% 00:22:16.340 nvme1n1: ios=22489/0, merge=0/0, ticks=1219789/0, in_queue=1219789, util=97.24% 00:22:16.340 nvme2n1: ios=19118/0, merge=0/0, ticks=1221474/0, in_queue=1221474, util=97.42% 00:22:16.340 nvme3n1: ios=20780/0, merge=0/0, ticks=1223559/0, in_queue=1223559, util=97.53% 00:22:16.340 nvme4n1: ios=26133/0, merge=0/0, ticks=1221485/0, in_queue=1221485, util=97.93% 00:22:16.340 nvme5n1: ios=26071/0, merge=0/0, ticks=1220375/0, in_queue=1220375, util=98.12% 00:22:16.340 nvme6n1: ios=24284/0, merge=0/0, ticks=1223542/0, in_queue=1223542, util=98.27% 00:22:16.340 nvme7n1: ios=26353/0, merge=0/0, ticks=1222206/0, in_queue=1222206, util=98.75% 00:22:16.340 nvme8n1: ios=77202/0, merge=0/0, ticks=1211974/0, in_queue=1211974, util=98.98% 00:22:16.340 nvme9n1: ios=26104/0, merge=0/0, ticks=1221307/0, in_queue=1221307, util=99.15% 00:22:16.340 17:25:11 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:16.340 [global] 00:22:16.340 thread=1 00:22:16.340 invalidate=1 00:22:16.340 rw=randwrite 00:22:16.340 time_based=1 00:22:16.340 runtime=10 00:22:16.340 ioengine=libaio 00:22:16.340 direct=1 00:22:16.340 bs=262144 00:22:16.340 iodepth=64 00:22:16.340 norandommap=1 00:22:16.340 numjobs=1 00:22:16.340 00:22:16.340 [job0] 00:22:16.340 filename=/dev/nvme0n1 00:22:16.340 [job1] 00:22:16.340 filename=/dev/nvme10n1 00:22:16.340 [job2] 00:22:16.340 filename=/dev/nvme1n1 00:22:16.340 [job3] 00:22:16.340 filename=/dev/nvme2n1 00:22:16.340 [job4] 00:22:16.340 filename=/dev/nvme3n1 00:22:16.340 [job5] 00:22:16.340 filename=/dev/nvme4n1 00:22:16.340 [job6] 00:22:16.340 
filename=/dev/nvme5n1 00:22:16.340 [job7] 00:22:16.340 filename=/dev/nvme6n1 00:22:16.340 [job8] 00:22:16.340 filename=/dev/nvme7n1 00:22:16.340 [job9] 00:22:16.340 filename=/dev/nvme8n1 00:22:16.340 [job10] 00:22:16.340 filename=/dev/nvme9n1 00:22:16.340 Could not set queue depth (nvme0n1) 00:22:16.340 Could not set queue depth (nvme10n1) 00:22:16.340 Could not set queue depth (nvme1n1) 00:22:16.340 Could not set queue depth (nvme2n1) 00:22:16.340 Could not set queue depth (nvme3n1) 00:22:16.340 Could not set queue depth (nvme4n1) 00:22:16.340 Could not set queue depth (nvme5n1) 00:22:16.340 Could not set queue depth (nvme6n1) 00:22:16.340 Could not set queue depth (nvme7n1) 00:22:16.340 Could not set queue depth (nvme8n1) 00:22:16.340 Could not set queue depth (nvme9n1) 00:22:16.340 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:16.340 fio-3.35 00:22:16.340 Starting 11 threads 00:22:26.326 00:22:26.326 job0: (groupid=0, jobs=1): err= 0: pid=1412936: Sat Dec 14 17:25:22 2024 00:22:26.326 write: IOPS=898, BW=225MiB/s (235MB/s)(2258MiB/10054msec); 0 zone resets 00:22:26.326 slat (usec): min=24, max=12115, avg=1089.32, stdev=2078.16 00:22:26.326 clat (msec): min=16, max=126, avg=70.14, stdev=14.84 00:22:26.326 lat (msec): min=16, max=126, avg=71.22, stdev=15.07 00:22:26.326 clat percentiles (msec): 00:22:26.326 | 1.00th=[ 42], 5.00th=[ 53], 10.00th=[ 53], 20.00th=[ 55], 00:22:26.326 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:22:26.326 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 92], 95.00th=[ 95], 00:22:26.326 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 117], 99.95th=[ 121], 00:22:26.326 | 99.99th=[ 127] 00:22:26.326 bw ( KiB/s): min=173056, max=316416, per=6.55%, avg=229598.15, stdev=44530.36, samples=20 00:22:26.326 iops : min= 676, max= 1236, avg=896.85, stdev=173.97, samples=20 00:22:26.326 lat (msec) : 20=0.11%, 50=1.54%, 100=97.82%, 250=0.53% 00:22:26.326 cpu : usr=1.95%, sys=4.03%, ctx=2267, majf=0, minf=78 00:22:26.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:26.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.326 issued rwts: total=0,9031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.326 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.326 job1: (groupid=0, jobs=1): err= 0: pid=1412950: Sat Dec 14 17:25:22 2024 00:22:26.326 write: IOPS=931, BW=233MiB/s (244MB/s)(2340MiB/10043msec); 0 zone resets 00:22:26.326 slat (usec): min=24, max=19718, avg=1063.06, stdev=2093.22 00:22:26.326 clat (msec): min=8, max=124, avg=67.58, stdev=18.51 00:22:26.326 lat (msec): min=8, max=132, avg=68.65, stdev=18.81 00:22:26.326 clat percentiles (msec): 00:22:26.326 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 50], 20.00th=[ 55], 00:22:26.326 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 67], 60.00th=[ 72], 00:22:26.326 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 92], 95.00th=[ 99], 00:22:26.326 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 122], 00:22:26.326 | 99.99th=[ 125] 00:22:26.326 bw ( KiB/s): min=157696, max=429056, per=6.79%, avg=238025.15, stdev=64943.86, samples=20 00:22:26.326 iops : min= 616, max= 1676, avg=929.75, stdev=253.70, samples=20 00:22:26.326 lat (msec) : 10=0.04%, 20=0.15%, 50=10.27%, 100=85.29%, 250=4.25% 00:22:26.326 cpu : usr=2.25%, sys=4.00%, ctx=2326, majf=0, minf=203 00:22:26.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:26.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.326 issued rwts: total=0,9360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.326 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.326 job2: (groupid=0, jobs=1): err= 0: pid=1412951: Sat Dec 14 17:25:22 2024 00:22:26.326 write: IOPS=1140, BW=285MiB/s (299MB/s)(2866MiB/10053msec); 0 zone resets 00:22:26.326 slat (usec): min=18, max=54100, avg=839.02, stdev=1975.04 00:22:26.326 clat (msec): min=8, max=131, avg=55.26, stdev=29.37 00:22:26.326 lat (msec): min=8, max=131, avg=56.10, stdev=29.81 00:22:26.326 clat percentiles (msec): 00:22:26.326 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:22:26.326 | 30.00th=[ 20], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 72], 00:22:26.326 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 91], 95.00th=[ 95], 00:22:26.326 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 123], 99.95th=[ 128], 00:22:26.326 | 99.99th=[ 132] 00:22:26.326 bw ( KiB/s): min=158720, max=880128, per=8.32%, avg=291887.50, stdev=209306.11, samples=20 00:22:26.326 iops : min= 620, max= 3438, avg=1140.15, stdev=817.62, samples=20 00:22:26.326 lat (msec) : 10=0.07%, 20=30.21%, 50=9.55%, 100=56.68%, 250=3.49% 00:22:26.326 cpu : usr=2.39%, sys=3.88%, ctx=2731, majf=0, minf=18 00:22:26.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:26.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.326 issued rwts: total=0,11464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.326 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.326 job3: (groupid=0, jobs=1): err= 0: pid=1412952: Sat Dec 14 17:25:22 2024 00:22:26.326 write: IOPS=2640, BW=660MiB/s (692MB/s)(6622MiB/10031msec); 0 zone resets 00:22:26.326 slat (usec): min=16, max=13861, avg=373.68, stdev=863.71 00:22:26.326 clat (usec): min=915, max=80147, avg=23852.73, stdev=12865.89 00:22:26.326 lat (usec): min=972, max=80214, avg=24226.40, stdev=13069.05 00:22:26.326 clat percentiles (usec): 
00:22:26.327 | 1.00th=[15401], 5.00th=[16188], 10.00th=[16581], 20.00th=[16909], 00:22:26.327 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:22:26.327 | 70.00th=[18482], 80.00th=[33162], 90.00th=[52167], 95.00th=[54264], 00:22:26.327 | 99.00th=[58459], 99.50th=[60031], 99.90th=[69731], 99.95th=[70779], 00:22:26.327 | 99.99th=[77071] 00:22:26.327 bw ( KiB/s): min=285696, max=922112, per=19.29%, avg=676597.05, stdev=284771.27, samples=20 00:22:26.327 iops : min= 1116, max= 3602, avg=2642.95, stdev=1112.38, samples=20 00:22:26.327 lat (usec) : 1000=0.01% 00:22:26.327 lat (msec) : 2=0.02%, 4=0.09%, 10=0.22%, 20=77.66%, 50=9.68% 00:22:26.327 lat (msec) : 100=12.32% 00:22:26.327 cpu : usr=3.81%, sys=6.02%, ctx=5801, majf=0, minf=142 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,26489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.327 job4: (groupid=0, jobs=1): err= 0: pid=1412953: Sat Dec 14 17:25:22 2024 00:22:26.327 write: IOPS=1055, BW=264MiB/s (277MB/s)(2647MiB/10030msec); 0 zone resets 00:22:26.327 slat (usec): min=23, max=21492, avg=923.96, stdev=1953.24 00:22:26.327 clat (msec): min=4, max=120, avg=59.69, stdev=23.14 00:22:26.327 lat (msec): min=4, max=121, avg=60.61, stdev=23.52 00:22:26.327 clat percentiles (msec): 00:22:26.327 | 1.00th=[ 19], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37], 00:22:26.327 | 30.00th=[ 38], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 70], 00:22:26.327 | 70.00th=[ 74], 80.00th=[ 87], 90.00th=[ 91], 95.00th=[ 97], 00:22:26.327 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 120], 00:22:26.327 | 99.99th=[ 121] 00:22:26.327 bw ( KiB/s): min=157184, max=444928, per=7.68%, avg=269436.55, stdev=105107.51, samples=20 00:22:26.327 iops : min= 614, max= 1738, avg=1052.45, stdev=410.59, samples=20 00:22:26.327 lat (msec) : 10=0.19%, 20=0.97%, 50=38.90%, 100=56.33%, 250=3.61% 00:22:26.327 cpu : usr=2.63%, sys=4.17%, ctx=2682, majf=0, minf=75 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,10587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.327 job5: (groupid=0, jobs=1): err= 0: pid=1412954: Sat Dec 14 17:25:22 2024 00:22:26.327 write: IOPS=890, BW=223MiB/s (234MB/s)(2239MiB/10054msec); 0 zone resets 00:22:26.327 slat (usec): min=28, max=20820, avg=1110.92, stdev=1987.85 00:22:26.327 clat (msec): min=25, max=123, avg=70.70, stdev=13.93 00:22:26.327 lat (msec): min=25, max=123, avg=71.81, stdev=14.11 00:22:26.327 clat percentiles (msec): 00:22:26.327 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:22:26.327 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:22:26.327 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 92], 95.00th=[ 94], 00:22:26.327 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 114], 99.95th=[ 118], 00:22:26.327 | 99.99th=[ 124] 00:22:26.327 bw ( KiB/s): min=171520, max=291328, per=6.49%, avg=227703.75, stdev=41267.62, samples=20 00:22:26.327 iops : min= 670, max= 1138, avg=889.45, 
stdev=161.23, samples=20 00:22:26.327 lat (msec) : 50=0.30%, 100=99.08%, 250=0.61% 00:22:26.327 cpu : usr=2.28%, sys=4.26%, ctx=2213, majf=0, minf=75 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,8957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.327 job6: (groupid=0, jobs=1): err= 0: pid=1412955: Sat Dec 14 17:25:22 2024 00:22:26.327 write: IOPS=1014, BW=254MiB/s (266MB/s)(2545MiB/10031msec); 0 zone resets 00:22:26.327 slat (usec): min=21, max=58491, avg=950.19, stdev=2179.54 00:22:26.327 clat (msec): min=4, max=166, avg=62.09, stdev=24.32 00:22:26.327 lat (msec): min=4, max=166, avg=63.04, stdev=24.72 00:22:26.327 clat percentiles (msec): 00:22:26.327 | 1.00th=[ 18], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:22:26.327 | 30.00th=[ 38], 40.00th=[ 53], 50.00th=[ 68], 60.00th=[ 73], 00:22:26.327 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 93], 95.00th=[ 97], 00:22:26.327 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 144], 00:22:26.327 | 99.99th=[ 167] 00:22:26.327 bw ( KiB/s): min=160768, max=442368, per=7.39%, avg=259038.25, stdev=94673.12, samples=20 00:22:26.327 iops : min= 628, max= 1728, avg=1011.85, stdev=369.83, samples=20 00:22:26.327 lat (msec) : 10=0.23%, 20=1.22%, 50=35.25%, 100=59.19%, 250=4.12% 00:22:26.327 cpu : usr=2.52%, sys=3.65%, ctx=2564, majf=0, minf=81 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,10181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.327 job7: (groupid=0, jobs=1): err= 0: pid=1412957: Sat Dec 14 17:25:22 2024 00:22:26.327 write: IOPS=891, BW=223MiB/s (234MB/s)(2242MiB/10055msec); 0 zone resets 00:22:26.327 slat (usec): min=25, max=11020, avg=1109.63, stdev=1989.49 00:22:26.327 clat (msec): min=3, max=126, avg=70.64, stdev=14.39 00:22:26.327 lat (msec): min=3, max=126, avg=71.75, stdev=14.57 00:22:26.327 clat percentiles (msec): 00:22:26.327 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:22:26.327 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 73], 60.00th=[ 75], 00:22:26.327 | 70.00th=[ 78], 80.00th=[ 86], 90.00th=[ 92], 95.00th=[ 95], 00:22:26.327 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 117], 99.95th=[ 123], 00:22:26.327 | 99.99th=[ 127] 00:22:26.327 bw ( KiB/s): min=171520, max=292352, per=6.50%, avg=227899.45, stdev=41369.72, samples=20 00:22:26.327 iops : min= 670, max= 1142, avg=890.20, stdev=161.65, samples=20 00:22:26.327 lat (msec) : 4=0.03%, 10=0.08%, 20=0.16%, 50=0.40%, 100=98.94% 00:22:26.327 lat (msec) : 250=0.39% 00:22:26.327 cpu : usr=2.17%, sys=4.09%, ctx=2226, majf=0, minf=74 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,8966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.327 job8: 
(groupid=0, jobs=1): err= 0: pid=1412959: Sat Dec 14 17:25:22 2024 00:22:26.327 write: IOPS=2450, BW=613MiB/s (642MB/s)(6145MiB/10030msec); 0 zone resets 00:22:26.327 slat (usec): min=18, max=6098, avg=404.52, stdev=826.52 00:22:26.327 clat (usec): min=9293, max=63528, avg=25702.51, stdev=11871.15 00:22:26.327 lat (usec): min=9332, max=64218, avg=26107.02, stdev=12045.75 00:22:26.327 clat percentiles (usec): 00:22:26.327 | 1.00th=[15926], 5.00th=[16581], 10.00th=[16909], 20.00th=[17171], 00:22:26.327 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 00:22:26.327 | 70.00th=[34341], 80.00th=[36439], 90.00th=[38536], 95.00th=[53740], 00:22:26.327 | 99.00th=[56886], 99.50th=[57934], 99.90th=[59507], 99.95th=[60556], 00:22:26.327 | 99.99th=[62129] 00:22:26.327 bw ( KiB/s): min=291328, max=927232, per=17.90%, avg=627701.30, stdev=247018.17, samples=20 00:22:26.327 iops : min= 1138, max= 3622, avg=2451.95, stdev=964.90, samples=20 00:22:26.327 lat (msec) : 10=0.02%, 20=63.31%, 50=28.85%, 100=7.82% 00:22:26.327 cpu : usr=3.95%, sys=6.13%, ctx=5235, majf=0, minf=76 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,24579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.327 job9: (groupid=0, jobs=1): err= 0: pid=1412961: Sat Dec 14 17:25:22 2024 00:22:26.327 write: IOPS=937, BW=234MiB/s (246MB/s)(2353MiB/10044msec); 0 zone resets 00:22:26.327 slat (usec): min=22, max=18044, avg=1055.63, stdev=2079.47 00:22:26.327 clat (msec): min=5, max=125, avg=67.21, stdev=18.24 00:22:26.327 lat (msec): min=5, max=125, avg=68.27, stdev=18.55 00:22:26.327 clat percentiles (msec): 00:22:26.327 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 50], 20.00th=[ 55], 00:22:26.327 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 66], 60.00th=[ 72], 00:22:26.327 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 91], 95.00th=[ 97], 00:22:26.327 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 117], 99.95th=[ 121], 00:22:26.327 | 99.99th=[ 126] 00:22:26.327 bw ( KiB/s): min=160256, max=434176, per=6.83%, avg=239356.50, stdev=65128.92, samples=20 00:22:26.327 iops : min= 626, max= 1696, avg=934.95, stdev=254.42, samples=20 00:22:26.327 lat (msec) : 10=0.13%, 20=0.14%, 50=10.24%, 100=85.52%, 250=3.97% 00:22:26.327 cpu : usr=2.12%, sys=3.34%, ctx=2218, majf=0, minf=245 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,9412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.327 job10: (groupid=0, jobs=1): err= 0: pid=1412963: Sat Dec 14 17:25:22 2024 00:22:26.327 write: IOPS=867, BW=217MiB/s (227MB/s)(2179MiB/10044msec); 0 zone resets 00:22:26.327 slat (usec): min=25, max=22099, avg=1114.51, stdev=2210.92 00:22:26.327 clat (msec): min=4, max=132, avg=72.62, stdev=19.28 00:22:26.327 lat (msec): min=4, max=132, avg=73.74, stdev=19.59 00:22:26.327 clat percentiles (msec): 00:22:26.327 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 55], 00:22:26.327 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 65], 60.00th=[ 88], 00:22:26.327 | 
70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 95], 95.00th=[ 100], 00:22:26.327 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 125], 99.95th=[ 128], 00:22:26.327 | 99.99th=[ 133] 00:22:26.327 bw ( KiB/s): min=157184, max=295424, per=6.32%, avg=221483.05, stdev=56136.04, samples=20 00:22:26.327 iops : min= 614, max= 1154, avg=865.15, stdev=219.30, samples=20 00:22:26.327 lat (msec) : 10=0.07%, 20=0.14%, 50=2.75%, 100=92.39%, 250=4.65% 00:22:26.327 cpu : usr=2.01%, sys=3.86%, ctx=2205, majf=0, minf=10 00:22:26.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:26.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:26.327 issued rwts: total=0,8714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.328 00:22:26.328 Run status group 0 (all jobs): 00:22:26.328 WRITE: bw=3425MiB/s (3591MB/s), 217MiB/s-660MiB/s (227MB/s-692MB/s), io=33.6GiB (36.1GB), run=10030-10055msec 00:22:26.328 00:22:26.328 Disk stats (read/write): 00:22:26.328 nvme0n1: ios=49/17727, merge=0/0, ticks=16/1214025, in_queue=1214041, util=96.68% 00:22:26.328 nvme10n1: ios=0/18328, merge=0/0, ticks=0/1214612, in_queue=1214612, util=96.85% 00:22:26.328 nvme1n1: ios=0/22596, merge=0/0, ticks=0/1218579, in_queue=1218579, util=97.19% 00:22:26.328 nvme2n1: ios=0/52410, merge=0/0, ticks=0/1218984, in_queue=1218984, util=97.37% 00:22:26.328 nvme3n1: ios=0/20617, merge=0/0, ticks=0/1217739, in_queue=1217739, util=97.46% 00:22:26.328 nvme4n1: ios=0/17577, merge=0/0, ticks=0/1213743, in_queue=1213743, util=97.82% 00:22:26.328 nvme5n1: ios=0/19809, merge=0/0, ticks=0/1218007, in_queue=1218007, util=98.00% 00:22:26.328 nvme6n1: ios=0/17603, merge=0/0, ticks=0/1213558, in_queue=1213558, util=98.15% 00:22:26.328 nvme7n1: ios=0/48601, merge=0/0, ticks=0/1225015, in_queue=1225015, util=98.59% 00:22:26.328 nvme8n1: ios=0/18429, merge=0/0, ticks=0/1218121, in_queue=1218121, util=98.82% 00:22:26.328 nvme9n1: ios=0/17031, merge=0/0, ticks=0/1215478, in_queue=1215478, util=98.98% 00:22:26.328 17:25:22 -- target/multiconnection.sh@36 -- # sync 00:22:26.328 17:25:22 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:26.328 17:25:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.328 17:25:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:26.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:26.587 17:25:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:26.587 17:25:23 -- common/autotest_common.sh@1208 -- # local i=0 00:22:26.587 17:25:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:26.587 17:25:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:22:26.587 17:25:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:26.587 17:25:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:26.587 17:25:23 -- common/autotest_common.sh@1220 -- # return 0 00:22:26.587 17:25:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.587 17:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.587 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:26.587 17:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.587 17:25:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.587 17:25:23 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:27.524 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:27.524 17:25:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:27.524 17:25:24 -- common/autotest_common.sh@1208 -- # local i=0 00:22:27.524 17:25:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:27.525 17:25:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:22:27.525 17:25:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:27.525 17:25:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:27.525 17:25:24 -- common/autotest_common.sh@1220 -- # return 0 00:22:27.525 17:25:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:27.525 17:25:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.525 17:25:24 -- common/autotest_common.sh@10 -- # set +x 00:22:27.525 17:25:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.525 17:25:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.525 17:25:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:28.905 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:28.905 17:25:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:28.905 17:25:25 -- common/autotest_common.sh@1208 -- # local i=0 00:22:28.905 17:25:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:28.905 17:25:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:22:28.905 17:25:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:28.905 17:25:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:28.905 17:25:25 -- common/autotest_common.sh@1220 -- # return 0 00:22:28.905 17:25:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:28.905 17:25:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.905 17:25:25 -- common/autotest_common.sh@10 -- # set +x 00:22:28.905 17:25:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.905 17:25:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.905 17:25:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:29.841 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:29.841 17:25:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:29.841 17:25:26 -- common/autotest_common.sh@1208 -- # local i=0 00:22:29.841 17:25:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:29.841 17:25:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:22:29.841 17:25:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:29.841 17:25:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:29.841 17:25:26 -- common/autotest_common.sh@1220 -- # return 0 00:22:29.841 17:25:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:29.841 17:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.841 17:25:26 -- common/autotest_common.sh@10 -- # set +x 00:22:29.841 17:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.841 17:25:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.841 17:25:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:30.845 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:30.845 17:25:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:30.845 17:25:27 -- common/autotest_common.sh@1208 -- # local i=0 00:22:30.845 17:25:27 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:30.845 17:25:27 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:22:30.845 17:25:27 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:30.845 17:25:27 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:30.845 17:25:27 -- common/autotest_common.sh@1220 -- # return 0 00:22:30.845 17:25:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:30.845 17:25:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.845 17:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:30.845 17:25:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.845 17:25:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.845 17:25:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:31.782 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:31.782 17:25:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:31.782 17:25:28 -- common/autotest_common.sh@1208 -- # local i=0 00:22:31.782 17:25:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:31.782 17:25:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:22:31.782 17:25:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:31.782 17:25:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:31.782 17:25:28 -- common/autotest_common.sh@1220 -- # return 0 00:22:31.782 17:25:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:31.782 17:25:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.782 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:22:31.782 17:25:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.782 17:25:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:31.782 17:25:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:32.720 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:32.720 17:25:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:32.720 17:25:29 -- common/autotest_common.sh@1208 -- # local i=0 00:22:32.720 17:25:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:32.720 17:25:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:32.720 17:25:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:32.720 17:25:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:32.720 17:25:29 -- common/autotest_common.sh@1220 -- # return 0 00:22:32.720 17:25:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:32.720 17:25:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.720 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:22:32.720 17:25:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.720 17:25:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.720 17:25:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:33.657 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:33.657 17:25:30 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:33.657 17:25:30 -- common/autotest_common.sh@1208 -- # local i=0 00:22:33.657 17:25:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:33.657 17:25:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:33.657 17:25:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:33.657 17:25:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:33.657 17:25:30 -- common/autotest_common.sh@1220 -- # return 0 00:22:33.657 17:25:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:33.657 17:25:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.657 17:25:30 -- common/autotest_common.sh@10 -- # set +x 00:22:33.657 17:25:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.657 17:25:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.657 17:25:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:35.036 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:35.036 17:25:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:35.036 17:25:31 -- common/autotest_common.sh@1208 -- # local i=0 00:22:35.036 17:25:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:35.036 17:25:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:35.036 17:25:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:35.036 17:25:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:35.036 17:25:31 -- common/autotest_common.sh@1220 -- # return 0 00:22:35.036 17:25:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:35.036 17:25:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.036 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:22:35.036 17:25:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.036 17:25:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.036 17:25:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:35.973 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:35.973 17:25:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:35.973 17:25:32 -- common/autotest_common.sh@1208 -- # local i=0 00:22:35.973 17:25:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:35.973 17:25:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:35.973 17:25:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:35.973 17:25:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:35.973 17:25:32 -- common/autotest_common.sh@1220 -- # return 0 00:22:35.973 17:25:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:35.973 17:25:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.973 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:22:35.973 17:25:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.973 17:25:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.973 17:25:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:36.911 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:36.911 17:25:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:36.911 17:25:33 -- 
common/autotest_common.sh@1208 -- # local i=0 00:22:36.911 17:25:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:36.911 17:25:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:36.911 17:25:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:36.911 17:25:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:36.911 17:25:33 -- common/autotest_common.sh@1220 -- # return 0 00:22:36.911 17:25:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:36.911 17:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.911 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:22:36.911 17:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.911 17:25:33 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:36.911 17:25:33 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:36.911 17:25:33 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:36.911 17:25:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:36.911 17:25:33 -- nvmf/common.sh@116 -- # sync 00:22:36.911 17:25:33 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:36.911 17:25:33 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:36.911 17:25:33 -- nvmf/common.sh@119 -- # set +e 00:22:36.911 17:25:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:36.911 17:25:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:36.911 rmmod nvme_rdma 00:22:36.911 rmmod nvme_fabrics 00:22:36.911 17:25:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:36.911 17:25:33 -- nvmf/common.sh@123 -- # set -e 00:22:36.911 17:25:33 -- nvmf/common.sh@124 -- # return 0 00:22:36.911 17:25:33 -- nvmf/common.sh@477 -- # '[' -n 1404707 ']' 00:22:36.911 17:25:33 -- nvmf/common.sh@478 -- # killprocess 1404707 00:22:36.911 17:25:33 -- common/autotest_common.sh@936 -- # '[' -z 1404707 ']' 00:22:36.911 17:25:33 -- common/autotest_common.sh@940 -- # kill -0 1404707 00:22:36.911 17:25:33 -- common/autotest_common.sh@941 -- # uname 00:22:36.911 17:25:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.911 17:25:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1404707 00:22:36.911 17:25:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:36.911 17:25:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:36.911 17:25:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1404707' 00:22:36.911 killing process with pid 1404707 00:22:36.911 17:25:33 -- common/autotest_common.sh@955 -- # kill 1404707 00:22:36.911 17:25:33 -- common/autotest_common.sh@960 -- # wait 1404707 00:22:37.480 17:25:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:37.480 17:25:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:37.480 00:22:37.480 real 1m15.932s 00:22:37.480 user 4m55.183s 00:22:37.480 sys 0m19.984s 00:22:37.480 17:25:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:37.480 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:22:37.480 ************************************ 00:22:37.480 END TEST nvmf_multiconnection 00:22:37.480 ************************************ 00:22:37.480 17:25:34 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:37.480 17:25:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:37.480 17:25:34 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:22:37.480 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:22:37.480 ************************************ 00:22:37.480 START TEST nvmf_initiator_timeout 00:22:37.480 ************************************ 00:22:37.480 17:25:34 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:37.480 * Looking for test storage... 00:22:37.480 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:37.480 17:25:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:37.481 17:25:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:37.481 17:25:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:37.740 17:25:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:37.741 17:25:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:37.741 17:25:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:37.741 17:25:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:37.741 17:25:34 -- scripts/common.sh@335 -- # IFS=.-: 00:22:37.741 17:25:34 -- scripts/common.sh@335 -- # read -ra ver1 00:22:37.741 17:25:34 -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.741 17:25:34 -- scripts/common.sh@336 -- # read -ra ver2 00:22:37.741 17:25:34 -- scripts/common.sh@337 -- # local 'op=<' 00:22:37.741 17:25:34 -- scripts/common.sh@339 -- # ver1_l=2 00:22:37.741 17:25:34 -- scripts/common.sh@340 -- # ver2_l=1 00:22:37.741 17:25:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:37.741 17:25:34 -- scripts/common.sh@343 -- # case "$op" in 00:22:37.741 17:25:34 -- scripts/common.sh@344 -- # : 1 00:22:37.741 17:25:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:37.741 17:25:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.741 17:25:34 -- scripts/common.sh@364 -- # decimal 1 00:22:37.741 17:25:34 -- scripts/common.sh@352 -- # local d=1 00:22:37.741 17:25:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.741 17:25:34 -- scripts/common.sh@354 -- # echo 1 00:22:37.741 17:25:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:37.741 17:25:34 -- scripts/common.sh@365 -- # decimal 2 00:22:37.741 17:25:34 -- scripts/common.sh@352 -- # local d=2 00:22:37.741 17:25:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.741 17:25:34 -- scripts/common.sh@354 -- # echo 2 00:22:37.741 17:25:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:37.741 17:25:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:37.741 17:25:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:37.741 17:25:34 -- scripts/common.sh@367 -- # return 0 00:22:37.741 17:25:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.741 17:25:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.741 --rc genhtml_branch_coverage=1 00:22:37.741 --rc genhtml_function_coverage=1 00:22:37.741 --rc genhtml_legend=1 00:22:37.741 --rc geninfo_all_blocks=1 00:22:37.741 --rc geninfo_unexecuted_blocks=1 00:22:37.741 00:22:37.741 ' 00:22:37.741 17:25:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.741 --rc genhtml_branch_coverage=1 00:22:37.741 --rc genhtml_function_coverage=1 00:22:37.741 --rc genhtml_legend=1 00:22:37.741 --rc geninfo_all_blocks=1 00:22:37.741 --rc geninfo_unexecuted_blocks=1 00:22:37.741 00:22:37.741 ' 00:22:37.741 17:25:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.741 --rc genhtml_branch_coverage=1 00:22:37.741 --rc genhtml_function_coverage=1 00:22:37.741 --rc genhtml_legend=1 00:22:37.741 --rc geninfo_all_blocks=1 00:22:37.741 --rc geninfo_unexecuted_blocks=1 00:22:37.741 00:22:37.741 ' 00:22:37.741 17:25:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.741 --rc genhtml_branch_coverage=1 00:22:37.741 --rc genhtml_function_coverage=1 00:22:37.741 --rc genhtml_legend=1 00:22:37.741 --rc geninfo_all_blocks=1 00:22:37.741 --rc geninfo_unexecuted_blocks=1 00:22:37.741 00:22:37.741 ' 00:22:37.741 17:25:34 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.741 17:25:34 -- nvmf/common.sh@7 -- # uname -s 00:22:37.741 17:25:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.741 17:25:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.741 17:25:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.741 17:25:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.741 17:25:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.741 17:25:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.741 17:25:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.741 17:25:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.741 17:25:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.741 17:25:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.741 17:25:34 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:37.741 17:25:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:37.741 17:25:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.741 17:25:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.741 17:25:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.741 17:25:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:37.741 17:25:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.741 17:25:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.741 17:25:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.741 17:25:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.741 17:25:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.741 17:25:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.741 17:25:34 -- paths/export.sh@5 -- # export PATH 00:22:37.741 17:25:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.741 17:25:34 -- nvmf/common.sh@46 -- # : 0 00:22:37.741 17:25:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.741 17:25:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.741 17:25:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.741 17:25:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.741 17:25:34 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.741 17:25:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.741 17:25:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.741 17:25:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.741 17:25:34 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.741 17:25:34 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.741 17:25:34 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:37.741 17:25:34 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:37.741 17:25:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.741 17:25:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.741 17:25:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.741 17:25:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.741 17:25:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.741 17:25:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.741 17:25:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.741 17:25:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:37.741 17:25:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:37.741 17:25:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:37.741 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:22:44.314 17:25:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:44.314 17:25:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:44.314 17:25:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:44.314 17:25:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:44.314 17:25:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:44.314 17:25:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:44.314 17:25:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:44.314 17:25:40 -- nvmf/common.sh@294 -- # net_devs=() 00:22:44.314 17:25:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:44.314 17:25:40 -- nvmf/common.sh@295 -- # e810=() 00:22:44.314 17:25:40 -- nvmf/common.sh@295 -- # local -ga e810 00:22:44.314 17:25:40 -- nvmf/common.sh@296 -- # x722=() 00:22:44.314 17:25:40 -- nvmf/common.sh@296 -- # local -ga x722 00:22:44.314 17:25:40 -- nvmf/common.sh@297 -- # mlx=() 00:22:44.314 17:25:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:44.314 17:25:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.314 17:25:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:44.314 17:25:40 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:22:44.314 17:25:40 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:44.314 17:25:40 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:44.314 17:25:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:44.314 17:25:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.314 17:25:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:44.314 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:44.314 17:25:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:44.314 17:25:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.314 17:25:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:44.314 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:44.314 17:25:40 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:44.314 17:25:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:44.314 17:25:40 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.314 17:25:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.314 17:25:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.314 17:25:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.314 17:25:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:44.314 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:44.314 17:25:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.314 17:25:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.314 17:25:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.314 17:25:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.314 17:25:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.314 17:25:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:44.314 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:44.314 17:25:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.314 17:25:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:44.314 17:25:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:44.314 17:25:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:44.314 17:25:40 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:44.314 17:25:40 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:44.314 17:25:40 -- nvmf/common.sh@57 -- # uname 00:22:44.314 17:25:40 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:44.314 17:25:40 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:22:44.314 17:25:40 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:44.314 17:25:40 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:44.314 17:25:40 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:44.314 17:25:40 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:44.314 17:25:40 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:44.314 17:25:40 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:44.314 17:25:40 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:44.314 17:25:40 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:44.314 17:25:40 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:44.314 17:25:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:44.314 17:25:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:44.314 17:25:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:44.314 17:25:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:44.314 17:25:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:44.314 17:25:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:44.315 17:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.315 17:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:44.315 17:25:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:44.315 17:25:40 -- nvmf/common.sh@104 -- # continue 2 00:22:44.315 17:25:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:44.315 17:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.315 17:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:44.315 17:25:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.315 17:25:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:44.315 17:25:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:44.315 17:25:40 -- nvmf/common.sh@104 -- # continue 2 00:22:44.315 17:25:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:44.315 17:25:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:44.315 17:25:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:44.315 17:25:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:44.315 17:25:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:44.315 17:25:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:44.315 17:25:40 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:44.315 17:25:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:44.315 17:25:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:44.315 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:44.315 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:44.315 altname enp217s0f0np0 00:22:44.315 altname ens818f0np0 00:22:44.315 inet 192.168.100.8/24 scope global mlx_0_0 00:22:44.315 valid_lft forever preferred_lft forever 00:22:44.315 17:25:40 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:44.315 17:25:40 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:44.315 17:25:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:44.315 17:25:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:44.315 17:25:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:44.315 17:25:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:44.315 17:25:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:44.315 17:25:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:44.315 17:25:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:44.575 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:44.575 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:44.575 altname enp217s0f1np1 00:22:44.575 altname ens818f1np1 00:22:44.575 inet 192.168.100.9/24 scope global mlx_0_1 00:22:44.575 valid_lft forever preferred_lft forever 00:22:44.575 17:25:40 -- nvmf/common.sh@410 -- # return 0 00:22:44.575 17:25:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:44.575 17:25:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:44.575 17:25:41 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:44.575 17:25:41 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:44.575 17:25:41 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:44.575 17:25:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:44.575 17:25:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:44.575 17:25:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:44.575 17:25:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:44.575 17:25:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:44.575 17:25:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:44.575 17:25:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.575 17:25:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:44.575 17:25:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:44.575 17:25:41 -- nvmf/common.sh@104 -- # continue 2 00:22:44.575 17:25:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:44.575 17:25:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.575 17:25:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:44.575 17:25:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.575 17:25:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:44.575 17:25:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:44.575 17:25:41 -- nvmf/common.sh@104 -- # continue 2 00:22:44.575 17:25:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:44.575 17:25:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:44.575 17:25:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:44.575 17:25:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:44.575 17:25:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:44.575 17:25:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:44.575 17:25:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:44.575 17:25:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:44.575 17:25:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:44.575 17:25:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:44.575 17:25:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:44.575 17:25:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:44.575 17:25:41 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:44.575 192.168.100.9' 00:22:44.575 17:25:41 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:44.575 192.168.100.9' 00:22:44.575 17:25:41 -- nvmf/common.sh@445 -- # head -n 1 00:22:44.575 17:25:41 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:44.575 17:25:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:44.575 192.168.100.9' 00:22:44.575 17:25:41 -- nvmf/common.sh@446 -- # tail -n +2 00:22:44.575 17:25:41 -- nvmf/common.sh@446 -- # head -n 1 00:22:44.575 17:25:41 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:44.575 17:25:41 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:44.575 17:25:41 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:44.575 17:25:41 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:44.575 17:25:41 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:44.575 17:25:41 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:44.575 17:25:41 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:44.575 17:25:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:44.575 17:25:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.575 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:44.575 17:25:41 -- nvmf/common.sh@469 -- # nvmfpid=1419763 00:22:44.575 17:25:41 -- nvmf/common.sh@470 -- # waitforlisten 1419763 00:22:44.575 17:25:41 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:44.575 17:25:41 -- common/autotest_common.sh@829 -- # '[' -z 1419763 ']' 00:22:44.575 17:25:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.575 17:25:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.575 17:25:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.575 17:25:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.575 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:44.575 [2024-12-14 17:25:41.165939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:44.575 [2024-12-14 17:25:41.165993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.575 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.575 [2024-12-14 17:25:41.235596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.834 [2024-12-14 17:25:41.275392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:44.834 [2024-12-14 17:25:41.275506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.834 [2024-12-14 17:25:41.275517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.834 [2024-12-14 17:25:41.275526] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
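The target above is launched as build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF and the harness then blocks in waitforlisten until the RPC socket answers. A minimal standalone sketch of that start-and-wait step, assuming an SPDK checkout as the working directory (the paths and the polling loop are illustrative, not lifted from the harness):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
# Poll until the target answers on its default UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done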
00:22:44.834 [2024-12-14 17:25:41.275573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.834 [2024-12-14 17:25:41.275597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.834 [2024-12-14 17:25:41.275705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.834 [2024-12-14 17:25:41.275707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.403 17:25:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.403 17:25:41 -- common/autotest_common.sh@862 -- # return 0 00:22:45.403 17:25:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:45.403 17:25:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.403 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:45.403 17:25:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.403 17:25:42 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:45.403 17:25:42 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.403 17:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.403 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.403 Malloc0 00:22:45.403 17:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.403 17:25:42 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:45.403 17:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.403 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.403 Delay0 00:22:45.403 17:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.403 17:25:42 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:45.403 17:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.403 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.662 [2024-12-14 17:25:42.103705] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x248a5b0/0x2494980) succeed. 00:22:45.662 [2024-12-14 17:25:42.113140] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x248bb50/0x24d6020) succeed. 
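The rpc_cmd calls traced above build the initiator_timeout fixture: a 64 MB malloc bdev, a delay bdev layered on top of it with 30 us average and p99 read/write latencies, and an RDMA transport. Roughly the same sequence, driven through scripts/rpc.py instead of the harness's rpc_cmd wrapper (a sketch, not the test's own invocation):

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192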
00:22:45.662 17:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.662 17:25:42 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:45.662 17:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.662 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.662 17:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.662 17:25:42 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:45.662 17:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.662 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.662 17:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.662 17:25:42 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:45.662 17:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.662 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:45.662 [2024-12-14 17:25:42.256464] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:45.662 17:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.662 17:25:42 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:46.600 17:25:43 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:46.600 17:25:43 -- common/autotest_common.sh@1187 -- # local i=0 00:22:46.600 17:25:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:46.600 17:25:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:46.600 17:25:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:49.133 17:25:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:49.133 17:25:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:49.133 17:25:45 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:49.133 17:25:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:49.133 17:25:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:49.133 17:25:45 -- common/autotest_common.sh@1197 -- # return 0 00:22:49.133 17:25:45 -- target/initiator_timeout.sh@35 -- # fio_pid=1420599 00:22:49.133 17:25:45 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:49.133 17:25:45 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:49.133 [global] 00:22:49.133 thread=1 00:22:49.133 invalidate=1 00:22:49.133 rw=write 00:22:49.133 time_based=1 00:22:49.133 runtime=60 00:22:49.133 ioengine=libaio 00:22:49.133 direct=1 00:22:49.133 bs=4096 00:22:49.133 iodepth=1 00:22:49.133 norandommap=0 00:22:49.133 numjobs=1 00:22:49.133 00:22:49.133 verify_dump=1 00:22:49.133 verify_backlog=512 00:22:49.133 verify_state_save=0 00:22:49.133 do_verify=1 00:22:49.133 verify=crc32c-intel 00:22:49.133 [job0] 00:22:49.133 filename=/dev/nvme0n1 00:22:49.133 Could not set queue depth (nvme0n1) 00:22:49.133 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:49.133 fio-3.35 00:22:49.133 Starting 1 thread 00:22:51.666 17:25:48 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:51.666 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.666 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:51.666 true 00:22:51.666 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.666 17:25:48 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:51.666 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.666 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:51.666 true 00:22:51.666 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.666 17:25:48 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:51.666 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.666 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:51.666 true 00:22:51.666 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.666 17:25:48 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:51.666 17:25:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.666 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:51.666 true 00:22:51.666 17:25:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.666 17:25:48 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:54.955 17:25:51 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:54.955 17:25:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.955 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.955 true 00:22:54.955 17:25:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.955 17:25:51 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:54.955 17:25:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.955 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.955 true 00:22:54.955 17:25:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.955 17:25:51 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:54.955 17:25:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.955 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.955 true 00:22:54.955 17:25:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.955 17:25:51 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:54.955 17:25:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.955 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:54.955 true 00:22:54.955 17:25:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.955 17:25:51 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:54.955 17:25:51 -- target/initiator_timeout.sh@54 -- # wait 1420599 00:23:51.192 00:23:51.192 job0: (groupid=0, jobs=1): err= 0: pid=1420732: Sat Dec 14 17:26:45 2024 00:23:51.192 read: IOPS=1243, BW=4974KiB/s (5093kB/s)(291MiB/60000msec) 00:23:51.192 slat (usec): min=8, max=16936, avg= 9.62, stdev=83.18 00:23:51.192 clat (usec): min=47, max=42583k, avg=675.34, stdev=155896.87 00:23:51.192 lat (usec): min=91, max=42583k, avg=684.96, stdev=155896.88 00:23:51.192 clat percentiles (usec): 00:23:51.192 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 00:23:51.192 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 104], 
60.00th=[ 106], 00:23:51.192 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 114], 95.00th=[ 117], 00:23:51.192 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 130], 99.95th=[ 143], 00:23:51.192 | 99.99th=[ 277] 00:23:51.192 write: IOPS=1245, BW=4983KiB/s (5103kB/s)(292MiB/60000msec); 0 zone resets 00:23:51.192 slat (usec): min=8, max=313, avg=11.96, stdev= 2.22 00:23:51.192 clat (usec): min=46, max=305, avg=101.77, stdev= 6.89 00:23:51.192 lat (usec): min=95, max=434, avg=113.73, stdev= 7.26 00:23:51.192 clat percentiles (usec): 00:23:51.192 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 96], 00:23:51.192 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 103], 00:23:51.192 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:23:51.192 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 137], 00:23:51.192 | 99.99th=[ 258] 00:23:51.192 bw ( KiB/s): min= 4096, max=19456, per=100.00%, avg=16648.46, stdev=2475.86, samples=35 00:23:51.192 iops : min= 1024, max= 4864, avg=4162.11, stdev=618.97, samples=35 00:23:51.192 lat (usec) : 50=0.01%, 100=33.42%, 250=66.56%, 500=0.01% 00:23:51.192 lat (msec) : >=2000=0.01% 00:23:51.192 cpu : usr=1.95%, sys=3.23%, ctx=149371, majf=0, minf=144 00:23:51.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.192 issued rwts: total=74611,74752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:51.192 00:23:51.192 Run status group 0 (all jobs): 00:23:51.192 READ: bw=4974KiB/s (5093kB/s), 4974KiB/s-4974KiB/s (5093kB/s-5093kB/s), io=291MiB (306MB), run=60000-60000msec 00:23:51.192 WRITE: bw=4983KiB/s (5103kB/s), 4983KiB/s-4983KiB/s (5103kB/s-5103kB/s), io=292MiB (306MB), run=60000-60000msec 00:23:51.192 00:23:51.192 Disk stats (read/write): 00:23:51.192 nvme0n1: ios=74541/74252, merge=0/0, ticks=7165/6872, in_queue=14037, util=99.86% 00:23:51.192 17:26:45 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:51.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:51.192 17:26:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:51.192 17:26:46 -- common/autotest_common.sh@1208 -- # local i=0 00:23:51.192 17:26:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:51.192 17:26:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:51.192 17:26:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:51.192 17:26:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:51.192 17:26:46 -- common/autotest_common.sh@1220 -- # return 0 00:23:51.192 17:26:46 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:51.192 17:26:46 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:51.192 nvmf hotplug test: fio successful as expected 00:23:51.192 17:26:46 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.192 17:26:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.192 17:26:46 -- common/autotest_common.sh@10 -- # set +x 00:23:51.192 17:26:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.192 17:26:46 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
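The teardown above disconnects the initiator and then checks lsblk until no namespace with the test serial remains before deleting the subsystem. A compact sketch of that pattern, with the retry bound chosen arbitrarily here rather than taken from the harness:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
for _ in $(seq 1 15); do
    # Wait for the block device carrying the test serial to disappear.
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
    sleep 1
done
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1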
00:23:51.192 17:26:46 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:51.192 17:26:46 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:51.192 17:26:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:51.192 17:26:46 -- nvmf/common.sh@116 -- # sync 00:23:51.192 17:26:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:51.192 17:26:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:51.192 17:26:46 -- nvmf/common.sh@119 -- # set +e 00:23:51.192 17:26:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:51.192 17:26:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:51.192 rmmod nvme_rdma 00:23:51.192 rmmod nvme_fabrics 00:23:51.192 17:26:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:51.192 17:26:46 -- nvmf/common.sh@123 -- # set -e 00:23:51.192 17:26:46 -- nvmf/common.sh@124 -- # return 0 00:23:51.192 17:26:46 -- nvmf/common.sh@477 -- # '[' -n 1419763 ']' 00:23:51.192 17:26:46 -- nvmf/common.sh@478 -- # killprocess 1419763 00:23:51.192 17:26:46 -- common/autotest_common.sh@936 -- # '[' -z 1419763 ']' 00:23:51.192 17:26:46 -- common/autotest_common.sh@940 -- # kill -0 1419763 00:23:51.192 17:26:46 -- common/autotest_common.sh@941 -- # uname 00:23:51.192 17:26:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:51.192 17:26:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1419763 00:23:51.192 17:26:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:51.192 17:26:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:51.192 17:26:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1419763' 00:23:51.192 killing process with pid 1419763 00:23:51.192 17:26:46 -- common/autotest_common.sh@955 -- # kill 1419763 00:23:51.192 17:26:46 -- common/autotest_common.sh@960 -- # wait 1419763 00:23:51.192 17:26:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:51.192 17:26:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:51.192 00:23:51.192 real 1m13.143s 00:23:51.192 user 4m33.738s 00:23:51.192 sys 0m8.074s 00:23:51.192 17:26:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:51.192 17:26:47 -- common/autotest_common.sh@10 -- # set +x 00:23:51.192 ************************************ 00:23:51.192 END TEST nvmf_initiator_timeout 00:23:51.192 ************************************ 00:23:51.193 17:26:47 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:51.193 17:26:47 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:51.193 17:26:47 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:51.193 17:26:47 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:51.193 17:26:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:51.193 17:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.193 17:26:47 -- common/autotest_common.sh@10 -- # set +x 00:23:51.193 ************************************ 00:23:51.193 START TEST nvmf_shutdown 00:23:51.193 ************************************ 00:23:51.193 17:26:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:51.193 * Looking for test storage... 
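killprocess, traced above against the nvmf target pid, first verifies the process is alive and is the expected reactor, then kills and reaps it. A simplified standalone equivalent (the real helper also inspects the process name and special-cases targets launched through sudo):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}
killprocess "$nvmfpid"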
00:23:51.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:51.193 17:26:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:51.193 17:26:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:51.193 17:26:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:51.193 17:26:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:51.193 17:26:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:51.193 17:26:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:51.193 17:26:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:51.193 17:26:47 -- scripts/common.sh@335 -- # IFS=.-: 00:23:51.193 17:26:47 -- scripts/common.sh@335 -- # read -ra ver1 00:23:51.193 17:26:47 -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.193 17:26:47 -- scripts/common.sh@336 -- # read -ra ver2 00:23:51.193 17:26:47 -- scripts/common.sh@337 -- # local 'op=<' 00:23:51.193 17:26:47 -- scripts/common.sh@339 -- # ver1_l=2 00:23:51.193 17:26:47 -- scripts/common.sh@340 -- # ver2_l=1 00:23:51.193 17:26:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:51.193 17:26:47 -- scripts/common.sh@343 -- # case "$op" in 00:23:51.193 17:26:47 -- scripts/common.sh@344 -- # : 1 00:23:51.193 17:26:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:51.193 17:26:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.193 17:26:47 -- scripts/common.sh@364 -- # decimal 1 00:23:51.193 17:26:47 -- scripts/common.sh@352 -- # local d=1 00:23:51.193 17:26:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.193 17:26:47 -- scripts/common.sh@354 -- # echo 1 00:23:51.193 17:26:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:51.193 17:26:47 -- scripts/common.sh@365 -- # decimal 2 00:23:51.193 17:26:47 -- scripts/common.sh@352 -- # local d=2 00:23:51.193 17:26:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.193 17:26:47 -- scripts/common.sh@354 -- # echo 2 00:23:51.193 17:26:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:51.193 17:26:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:51.193 17:26:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:51.193 17:26:47 -- scripts/common.sh@367 -- # return 0 00:23:51.193 17:26:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.193 17:26:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.193 --rc genhtml_branch_coverage=1 00:23:51.193 --rc genhtml_function_coverage=1 00:23:51.193 --rc genhtml_legend=1 00:23:51.193 --rc geninfo_all_blocks=1 00:23:51.193 --rc geninfo_unexecuted_blocks=1 00:23:51.193 00:23:51.193 ' 00:23:51.193 17:26:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.193 --rc genhtml_branch_coverage=1 00:23:51.193 --rc genhtml_function_coverage=1 00:23:51.193 --rc genhtml_legend=1 00:23:51.193 --rc geninfo_all_blocks=1 00:23:51.193 --rc geninfo_unexecuted_blocks=1 00:23:51.193 00:23:51.193 ' 00:23:51.193 17:26:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.193 --rc genhtml_branch_coverage=1 00:23:51.193 --rc genhtml_function_coverage=1 00:23:51.193 --rc genhtml_legend=1 00:23:51.193 --rc geninfo_all_blocks=1 00:23:51.193 --rc geninfo_unexecuted_blocks=1 00:23:51.193 00:23:51.193 ' 
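The scripts/common.sh trace above is the lcov version gate: split both version strings on '.', '-' and ':' and compare them component by component. A minimal sketch of the same idea (the function name is illustrative, not the script's own, and the digit-validation step is omitted):

version_lt() {
    local IFS=.-: a b n v
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1   # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"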
00:23:51.193 17:26:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:51.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.193 --rc genhtml_branch_coverage=1 00:23:51.193 --rc genhtml_function_coverage=1 00:23:51.193 --rc genhtml_legend=1 00:23:51.193 --rc geninfo_all_blocks=1 00:23:51.193 --rc geninfo_unexecuted_blocks=1 00:23:51.193 00:23:51.193 ' 00:23:51.193 17:26:47 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.193 17:26:47 -- nvmf/common.sh@7 -- # uname -s 00:23:51.193 17:26:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.193 17:26:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.193 17:26:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.193 17:26:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.193 17:26:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.193 17:26:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.193 17:26:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.193 17:26:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.193 17:26:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.193 17:26:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.193 17:26:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:51.193 17:26:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:51.193 17:26:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.193 17:26:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.193 17:26:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.193 17:26:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:51.193 17:26:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.193 17:26:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.193 17:26:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.193 17:26:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.193 17:26:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.193 17:26:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.193 17:26:47 -- paths/export.sh@5 -- # export PATH 00:23:51.193 17:26:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.193 17:26:47 -- nvmf/common.sh@46 -- # : 0 00:23:51.193 17:26:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:51.193 17:26:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:51.193 17:26:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:51.193 17:26:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.193 17:26:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.193 17:26:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:51.193 17:26:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:51.193 17:26:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:51.193 17:26:47 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.193 17:26:47 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.193 17:26:47 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:51.193 17:26:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:51.193 17:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.193 17:26:47 -- common/autotest_common.sh@10 -- # set +x 00:23:51.193 ************************************ 00:23:51.193 START TEST nvmf_shutdown_tc1 00:23:51.193 ************************************ 00:23:51.193 17:26:47 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:23:51.193 17:26:47 -- target/shutdown.sh@74 -- # starttarget 00:23:51.193 17:26:47 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:51.193 17:26:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:51.193 17:26:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.193 17:26:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:51.193 17:26:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:51.193 17:26:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:51.193 17:26:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.193 17:26:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.193 17:26:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.193 17:26:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:51.193 17:26:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:51.193 17:26:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:51.193 17:26:47 -- common/autotest_common.sh@10 -- # set +x 00:23:57.811 17:26:53 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:57.811 17:26:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:57.811 17:26:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:57.811 17:26:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:57.811 17:26:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:57.811 17:26:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:57.811 17:26:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:57.811 17:26:53 -- nvmf/common.sh@294 -- # net_devs=() 00:23:57.811 17:26:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:57.811 17:26:53 -- nvmf/common.sh@295 -- # e810=() 00:23:57.811 17:26:53 -- nvmf/common.sh@295 -- # local -ga e810 00:23:57.811 17:26:53 -- nvmf/common.sh@296 -- # x722=() 00:23:57.811 17:26:53 -- nvmf/common.sh@296 -- # local -ga x722 00:23:57.811 17:26:53 -- nvmf/common.sh@297 -- # mlx=() 00:23:57.811 17:26:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:57.811 17:26:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.811 17:26:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:57.811 17:26:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:57.811 17:26:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:57.811 17:26:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:57.811 17:26:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:57.811 17:26:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:57.811 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:57.811 17:26:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:57.811 17:26:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:57.811 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:57.811 17:26:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:57.811 17:26:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:57.811 17:26:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.811 17:26:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.811 17:26:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.811 17:26:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:57.811 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:57.811 17:26:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.811 17:26:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.811 17:26:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.811 17:26:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.811 17:26:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:57.811 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:57.811 17:26:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.811 17:26:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:57.811 17:26:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:57.811 17:26:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:57.811 17:26:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:57.811 17:26:53 -- nvmf/common.sh@57 -- # uname 00:23:57.811 17:26:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:57.811 17:26:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:57.811 17:26:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:57.811 17:26:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:57.811 17:26:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:57.811 17:26:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:57.811 17:26:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:57.811 17:26:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:57.811 17:26:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:57.811 17:26:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:57.811 17:26:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:57.811 17:26:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:57.811 17:26:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:57.811 17:26:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:57.811 17:26:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:57.811 17:26:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:57.811 17:26:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:57.811 17:26:53 -- nvmf/common.sh@104 -- # continue 2 
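The device discovery traced above walks the known Mellanox and Intel PCI IDs and, for each matching function, lists the netdevs registered under it in sysfs; that is where the "Found net devices under 0000:d9:00.x" lines come from. The same lookup reduced to its core, with the two ConnectX ports of this rig hard-coded for illustration:

for pci in 0000:d9:00.0 0000:d9:00.1; do
    # Each PCI network function exposes its kernel netdevs under .../net/.
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done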
00:23:57.811 17:26:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:57.811 17:26:53 -- nvmf/common.sh@104 -- # continue 2 00:23:57.811 17:26:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:57.811 17:26:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:57.811 17:26:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:57.811 17:26:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:57.811 17:26:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.811 17:26:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.811 17:26:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:57.811 17:26:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:57.811 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:57.811 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:57.811 altname enp217s0f0np0 00:23:57.811 altname ens818f0np0 00:23:57.811 inet 192.168.100.8/24 scope global mlx_0_0 00:23:57.811 valid_lft forever preferred_lft forever 00:23:57.811 17:26:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:57.811 17:26:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:57.811 17:26:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:57.811 17:26:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:57.811 17:26:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.811 17:26:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.811 17:26:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:57.811 17:26:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:57.811 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:57.811 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:57.811 altname enp217s0f1np1 00:23:57.811 altname ens818f1np1 00:23:57.811 inet 192.168.100.9/24 scope global mlx_0_1 00:23:57.811 valid_lft forever preferred_lft forever 00:23:57.811 17:26:53 -- nvmf/common.sh@410 -- # return 0 00:23:57.811 17:26:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:57.811 17:26:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:57.811 17:26:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:57.811 17:26:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:57.811 17:26:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:57.811 17:26:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:57.811 17:26:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:57.811 17:26:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:57.811 17:26:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:57.811 17:26:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:57.811 17:26:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.811 17:26:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:57.812 17:26:53 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:57.812 17:26:53 -- nvmf/common.sh@104 -- # continue 2 00:23:57.812 17:26:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:57.812 17:26:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.812 17:26:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:57.812 17:26:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:57.812 17:26:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:57.812 17:26:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:57.812 17:26:53 -- nvmf/common.sh@104 -- # continue 2 00:23:57.812 17:26:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:57.812 17:26:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:57.812 17:26:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:57.812 17:26:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:57.812 17:26:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.812 17:26:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.812 17:26:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:57.812 17:26:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:57.812 17:26:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:57.812 17:26:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:57.812 17:26:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:57.812 17:26:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:57.812 17:26:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:57.812 192.168.100.9' 00:23:57.812 17:26:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:57.812 192.168.100.9' 00:23:57.812 17:26:53 -- nvmf/common.sh@445 -- # head -n 1 00:23:57.812 17:26:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:57.812 17:26:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:57.812 192.168.100.9' 00:23:57.812 17:26:53 -- nvmf/common.sh@446 -- # head -n 1 00:23:57.812 17:26:53 -- nvmf/common.sh@446 -- # tail -n +2 00:23:57.812 17:26:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:57.812 17:26:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:57.812 17:26:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:57.812 17:26:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:57.812 17:26:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:57.812 17:26:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:57.812 17:26:53 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:57.812 17:26:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.812 17:26:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.812 17:26:53 -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 17:26:54 -- nvmf/common.sh@469 -- # nvmfpid=1434268 00:23:57.812 17:26:54 -- nvmf/common.sh@470 -- # waitforlisten 1434268 00:23:57.812 17:26:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:57.812 17:26:54 -- common/autotest_common.sh@829 -- # '[' -z 1434268 ']' 00:23:57.812 17:26:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.812 17:26:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.812 17:26:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
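get_ip_address, traced above for both ports, is ip -o -4 output filtered down to the bare address; the two results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A standalone sketch of the same extraction (the helper name is illustrative):

get_ipv4() {
    # Print the IPv4 address of the given interface without the /24 prefix.
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ipv4 mlx_0_0)     # 192.168.100.8 on this rig
NVMF_SECOND_TARGET_IP=$(get_ipv4 mlx_0_1)    # 192.168.100.9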
00:23:57.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.812 17:26:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.812 17:26:54 -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 [2024-12-14 17:26:54.046529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:57.812 [2024-12-14 17:26:54.046576] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.812 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.812 [2024-12-14 17:26:54.112518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.812 [2024-12-14 17:26:54.150439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.812 [2024-12-14 17:26:54.150565] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.812 [2024-12-14 17:26:54.150574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.812 [2024-12-14 17:26:54.150583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.812 [2024-12-14 17:26:54.150629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.812 [2024-12-14 17:26:54.150725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.812 [2024-12-14 17:26:54.150834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.812 [2024-12-14 17:26:54.150836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:58.380 17:26:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.380 17:26:54 -- common/autotest_common.sh@862 -- # return 0 00:23:58.381 17:26:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:58.381 17:26:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.381 17:26:54 -- common/autotest_common.sh@10 -- # set +x 00:23:58.381 17:26:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.381 17:26:54 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:58.381 17:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.381 17:26:54 -- common/autotest_common.sh@10 -- # set +x 00:23:58.381 [2024-12-14 17:26:54.939892] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17a33c0/0x17a7890) succeed. 00:23:58.381 [2024-12-14 17:26:54.949213] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17a4960/0x17e8f30) succeed. 
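The repetitive per-interface trace above reduces to a small amount of shell: find which net devices are RDMA-capable, take each one's IPv4 address, and keep the first two as the target addresses. A condensed sketch, using the helper names from nvmf/common.sh as they appear in the trace, with this run's values in comments; it is a paraphrase of the traced logic, not the full helper source:

    # Condensed sketch of the interface discovery traced above.
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)        # RDMA-capable netdevs, here mlx_0_0 and mlx_0_1
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2                              # next net_dev once a match is printed
            fi
        done
    done

    # Per interface, the IPv4 address is cut out of `ip -o -4 addr show`:
    ip=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)       # 192.168.100.8 here

    # The resulting list is split into the two target addresses:
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

The same discovery runs again later in this log when the second test case re-enters nvmftestinit.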
00:23:58.381 17:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.381 17:26:55 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:58.643 17:26:55 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:58.643 17:26:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.643 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 17:26:55 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.643 17:26:55 -- target/shutdown.sh@28 -- # cat 00:23:58.643 17:26:55 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:58.643 17:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.643 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 Malloc1 00:23:58.643 [2024-12-14 17:26:55.170427] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:58.643 Malloc2 00:23:58.643 Malloc3 00:23:58.643 Malloc4 00:23:58.902 Malloc5 00:23:58.902 Malloc6 00:23:58.902 Malloc7 00:23:58.902 Malloc8 00:23:58.902 Malloc9 00:23:58.902 Malloc10 00:23:58.902 17:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.902 17:26:55 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:58.902 17:26:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.902 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:59.161 17:26:55 -- target/shutdown.sh@78 -- # perfpid=1434582 00:23:59.161 17:26:55 -- target/shutdown.sh@79 -- # waitforlisten 1434582 /var/tmp/bdevperf.sock 00:23:59.161 17:26:55 -- common/autotest_common.sh@829 -- # '[' -z 1434582 ']' 00:23:59.161 17:26:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.161 17:26:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.161 17:26:55 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:59.161 17:26:55 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:59.161 17:26:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.161 17:26:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.161 17:26:55 -- nvmf/common.sh@520 -- # config=() 00:23:59.161 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:59.161 17:26:55 -- nvmf/common.sh@520 -- # local subsystem config 00:23:59.161 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.161 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.161 { 00:23:59.161 "params": { 00:23:59.161 "name": "Nvme$subsystem", 00:23:59.161 "trtype": "$TEST_TRANSPORT", 00:23:59.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.161 "adrfam": "ipv4", 00:23:59.161 "trsvcid": "$NVMF_PORT", 00:23:59.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.161 "hdgst": ${hdgst:-false}, 00:23:59.161 "ddgst": ${ddgst:-false} 00:23:59.161 }, 00:23:59.161 "method": "bdev_nvme_attach_controller" 00:23:59.161 } 00:23:59.161 EOF 00:23:59.161 )") 00:23:59.161 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.161 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.161 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.161 { 00:23:59.161 "params": { 00:23:59.161 "name": "Nvme$subsystem", 00:23:59.161 "trtype": "$TEST_TRANSPORT", 00:23:59.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.161 "adrfam": "ipv4", 00:23:59.161 "trsvcid": "$NVMF_PORT", 00:23:59.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.161 "hdgst": ${hdgst:-false}, 00:23:59.161 "ddgst": ${ddgst:-false} 00:23:59.161 }, 00:23:59.161 "method": "bdev_nvme_attach_controller" 00:23:59.161 } 00:23:59.161 EOF 00:23:59.161 )") 00:23:59.161 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.161 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.161 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.161 { 00:23:59.161 "params": { 00:23:59.161 "name": "Nvme$subsystem", 00:23:59.161 "trtype": "$TEST_TRANSPORT", 00:23:59.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.162 { 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme$subsystem", 00:23:59.162 "trtype": "$TEST_TRANSPORT", 00:23:59.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
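Each config+=(...) entry traced here appends one controller definition to the JSON that bdevperf will consume; ten such blocks are emitted, one per subsystem, and the remaining blocks below follow the same shape. A cleaned-up sketch of that loop as traced (a paraphrase of the gen_nvmf_target_json pattern, not the complete function):

    # Sketch of the per-subsystem block appended by gen_nvmf_target_json, as traced above and below.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<-EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": ${hdgst:-false},
            "ddgst": ${ddgst:-false}
          },
          "method": "bdev_nvme_attach_controller"
        }
EOF
        )")
    done
    # The blocks are then comma-joined and run through jq, as the later trace entries show.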
00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.162 { 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme$subsystem", 00:23:59.162 "trtype": "$TEST_TRANSPORT", 00:23:59.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 [2024-12-14 17:26:55.659292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:59.162 [2024-12-14 17:26:55.659346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:59.162 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.162 { 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme$subsystem", 00:23:59.162 "trtype": "$TEST_TRANSPORT", 00:23:59.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.162 { 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme$subsystem", 00:23:59.162 "trtype": "$TEST_TRANSPORT", 00:23:59.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.162 { 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme$subsystem", 00:23:59.162 "trtype": "$TEST_TRANSPORT", 00:23:59.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.162 { 00:23:59.162 "params": { 00:23:59.162 "name": 
"Nvme$subsystem", 00:23:59.162 "trtype": "$TEST_TRANSPORT", 00:23:59.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.162 17:26:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.162 { 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme$subsystem", 00:23:59.162 "trtype": "$TEST_TRANSPORT", 00:23:59.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "$NVMF_PORT", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.162 "hdgst": ${hdgst:-false}, 00:23:59.162 "ddgst": ${ddgst:-false} 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 } 00:23:59.162 EOF 00:23:59.162 )") 00:23:59.162 17:26:55 -- nvmf/common.sh@542 -- # cat 00:23:59.162 17:26:55 -- nvmf/common.sh@544 -- # jq . 00:23:59.162 17:26:55 -- nvmf/common.sh@545 -- # IFS=, 00:23:59.162 17:26:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme1", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "4420", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.162 "hdgst": false, 00:23:59.162 "ddgst": false 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 },{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme2", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "4420", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:59.162 "hdgst": false, 00:23:59.162 "ddgst": false 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 },{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme3", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "4420", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:59.162 "hdgst": false, 00:23:59.162 "ddgst": false 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 },{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme4", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "4420", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:59.162 "hdgst": false, 00:23:59.162 "ddgst": false 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 },{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme5", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "4420", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:59.162 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:59.162 "hdgst": false, 00:23:59.162 "ddgst": false 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 },{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme6", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "4420", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:59.162 "hdgst": false, 00:23:59.162 "ddgst": false 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 },{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme7", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.162 "trsvcid": "4420", 00:23:59.162 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:59.162 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:59.162 "hdgst": false, 00:23:59.162 "ddgst": false 00:23:59.162 }, 00:23:59.162 "method": "bdev_nvme_attach_controller" 00:23:59.162 },{ 00:23:59.162 "params": { 00:23:59.162 "name": "Nvme8", 00:23:59.162 "trtype": "rdma", 00:23:59.162 "traddr": "192.168.100.8", 00:23:59.162 "adrfam": "ipv4", 00:23:59.163 "trsvcid": "4420", 00:23:59.163 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:59.163 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:59.163 "hdgst": false, 00:23:59.163 "ddgst": false 00:23:59.163 }, 00:23:59.163 "method": "bdev_nvme_attach_controller" 00:23:59.163 },{ 00:23:59.163 "params": { 00:23:59.163 "name": "Nvme9", 00:23:59.163 "trtype": "rdma", 00:23:59.163 "traddr": "192.168.100.8", 00:23:59.163 "adrfam": "ipv4", 00:23:59.163 "trsvcid": "4420", 00:23:59.163 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:59.163 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:59.163 "hdgst": false, 00:23:59.163 "ddgst": false 00:23:59.163 }, 00:23:59.163 "method": "bdev_nvme_attach_controller" 00:23:59.163 },{ 00:23:59.163 "params": { 00:23:59.163 "name": "Nvme10", 00:23:59.163 "trtype": "rdma", 00:23:59.163 "traddr": "192.168.100.8", 00:23:59.163 "adrfam": "ipv4", 00:23:59.163 "trsvcid": "4420", 00:23:59.163 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:59.163 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:59.163 "hdgst": false, 00:23:59.163 "ddgst": false 00:23:59.163 }, 00:23:59.163 "method": "bdev_nvme_attach_controller" 00:23:59.163 }' 00:23:59.163 [2024-12-14 17:26:55.732561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.163 [2024-12-14 17:26:55.768691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.540 17:26:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.540 17:26:57 -- common/autotest_common.sh@862 -- # return 0 00:24:00.540 17:26:57 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:00.540 17:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.540 17:26:57 -- common/autotest_common.sh@10 -- # set +x 00:24:00.540 17:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.540 17:26:57 -- target/shutdown.sh@83 -- # kill -9 1434582 00:24:00.540 17:26:57 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:00.540 17:26:57 -- target/shutdown.sh@87 -- # sleep 1 00:24:01.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1434582 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:01.920 17:26:58 -- target/shutdown.sh@88 -- # kill -0 
1434268 00:24:01.920 17:26:58 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:01.920 17:26:58 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:01.920 17:26:58 -- nvmf/common.sh@520 -- # config=() 00:24:01.920 17:26:58 -- nvmf/common.sh@520 -- # local subsystem config 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.920 "trsvcid": "$NVMF_PORT", 00:24:01.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.920 "hdgst": ${hdgst:-false}, 00:24:01.920 "ddgst": ${ddgst:-false} 00:24:01.920 }, 00:24:01.920 "method": "bdev_nvme_attach_controller" 00:24:01.920 } 00:24:01.920 EOF 00:24:01.920 )") 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.920 "trsvcid": "$NVMF_PORT", 00:24:01.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.920 "hdgst": ${hdgst:-false}, 00:24:01.920 "ddgst": ${ddgst:-false} 00:24:01.920 }, 00:24:01.920 "method": "bdev_nvme_attach_controller" 00:24:01.920 } 00:24:01.920 EOF 00:24:01.920 )") 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.920 "trsvcid": "$NVMF_PORT", 00:24:01.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.920 "hdgst": ${hdgst:-false}, 00:24:01.920 "ddgst": ${ddgst:-false} 00:24:01.920 }, 00:24:01.920 "method": "bdev_nvme_attach_controller" 00:24:01.920 } 00:24:01.920 EOF 00:24:01.920 )") 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.920 "trsvcid": "$NVMF_PORT", 00:24:01.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.920 "hdgst": ${hdgst:-false}, 00:24:01.920 "ddgst": ${ddgst:-false} 00:24:01.920 }, 00:24:01.920 "method": "bdev_nvme_attach_controller" 00:24:01.920 } 00:24:01.920 EOF 00:24:01.920 )") 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 
17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.920 "trsvcid": "$NVMF_PORT", 00:24:01.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.920 "hdgst": ${hdgst:-false}, 00:24:01.920 "ddgst": ${ddgst:-false} 00:24:01.920 }, 00:24:01.920 "method": "bdev_nvme_attach_controller" 00:24:01.920 } 00:24:01.920 EOF 00:24:01.920 )") 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.920 "trsvcid": "$NVMF_PORT", 00:24:01.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.920 "hdgst": ${hdgst:-false}, 00:24:01.920 "ddgst": ${ddgst:-false} 00:24:01.920 }, 00:24:01.920 "method": "bdev_nvme_attach_controller" 00:24:01.920 } 00:24:01.920 EOF 00:24:01.920 )") 00:24:01.920 [2024-12-14 17:26:58.219345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:01.920 [2024-12-14 17:26:58.219399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435146 ] 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.920 "trsvcid": "$NVMF_PORT", 00:24:01.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.920 "hdgst": ${hdgst:-false}, 00:24:01.920 "ddgst": ${ddgst:-false} 00:24:01.920 }, 00:24:01.920 "method": "bdev_nvme_attach_controller" 00:24:01.920 } 00:24:01.920 EOF 00:24:01.920 )") 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.920 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.920 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.920 { 00:24:01.920 "params": { 00:24:01.920 "name": "Nvme$subsystem", 00:24:01.920 "trtype": "$TEST_TRANSPORT", 00:24:01.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.920 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "$NVMF_PORT", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.921 "hdgst": ${hdgst:-false}, 00:24:01.921 "ddgst": ${ddgst:-false} 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 } 00:24:01.921 EOF 00:24:01.921 )") 00:24:01.921 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.921 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.921 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.921 { 00:24:01.921 "params": { 00:24:01.921 "name": 
"Nvme$subsystem", 00:24:01.921 "trtype": "$TEST_TRANSPORT", 00:24:01.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "$NVMF_PORT", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.921 "hdgst": ${hdgst:-false}, 00:24:01.921 "ddgst": ${ddgst:-false} 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 } 00:24:01.921 EOF 00:24:01.921 )") 00:24:01.921 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.921 17:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:01.921 17:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:01.921 { 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme$subsystem", 00:24:01.921 "trtype": "$TEST_TRANSPORT", 00:24:01.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "$NVMF_PORT", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.921 "hdgst": ${hdgst:-false}, 00:24:01.921 "ddgst": ${ddgst:-false} 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 } 00:24:01.921 EOF 00:24:01.921 )") 00:24:01.921 17:26:58 -- nvmf/common.sh@542 -- # cat 00:24:01.921 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.921 17:26:58 -- nvmf/common.sh@544 -- # jq . 00:24:01.921 17:26:58 -- nvmf/common.sh@545 -- # IFS=, 00:24:01.921 17:26:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme1", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme2", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme3", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme4", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme5", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:01.921 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme6", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme7", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme8", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme9", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 },{ 00:24:01.921 "params": { 00:24:01.921 "name": "Nvme10", 00:24:01.921 "trtype": "rdma", 00:24:01.921 "traddr": "192.168.100.8", 00:24:01.921 "adrfam": "ipv4", 00:24:01.921 "trsvcid": "4420", 00:24:01.921 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:01.921 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:01.921 "hdgst": false, 00:24:01.921 "ddgst": false 00:24:01.921 }, 00:24:01.921 "method": "bdev_nvme_attach_controller" 00:24:01.921 }' 00:24:01.921 [2024-12-14 17:26:58.294134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.921 [2024-12-14 17:26:58.331187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.858 Running I/O for 1 seconds... 
00:24:03.796 00:24:03.796 Latency(us) 00:24:03.796 [2024-12-14T16:27:00.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme1n1 : 1.10 733.20 45.83 0.00 0.00 86347.77 7392.46 120795.96 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme2n1 : 1.11 749.72 46.86 0.00 0.00 83820.56 7654.60 76336.33 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme3n1 : 1.11 745.44 46.59 0.00 0.00 83822.53 7864.32 74658.61 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme4n1 : 1.11 750.18 46.89 0.00 0.00 82816.65 8074.04 72142.03 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme5n1 : 1.11 744.10 46.51 0.00 0.00 83001.69 8283.75 69625.45 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme6n1 : 1.11 743.43 46.46 0.00 0.00 82578.63 8441.04 69625.45 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme7n1 : 1.11 742.76 46.42 0.00 0.00 82151.13 8650.75 72142.03 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme8n1 : 1.11 742.10 46.38 0.00 0.00 81720.39 8860.47 74239.18 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme9n1 : 1.11 741.44 46.34 0.00 0.00 81303.88 9070.18 76336.33 00:24:03.796 [2024-12-14T16:27:00.480Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.796 Verification LBA range: start 0x0 length 0x400 00:24:03.796 Nvme10n1 : 1.11 547.78 34.24 0.00 0.00 109218.04 7654.60 335544.32 00:24:03.796 [2024-12-14T16:27:00.480Z] =================================================================================================================== 00:24:03.796 [2024-12-14T16:27:00.480Z] Total : 7240.16 452.51 0.00 0.00 85043.54 7392.46 335544.32 00:24:04.056 17:27:00 -- target/shutdown.sh@93 -- # stoptarget 00:24:04.056 17:27:00 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:04.056 17:27:00 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:04.056 17:27:00 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:04.056 17:27:00 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:04.056 17:27:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:04.056 17:27:00 -- nvmf/common.sh@116 -- # sync 00:24:04.056 17:27:00 -- 
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:04.056 17:27:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:04.056 17:27:00 -- nvmf/common.sh@119 -- # set +e 00:24:04.056 17:27:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:04.056 17:27:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:04.056 rmmod nvme_rdma 00:24:04.056 rmmod nvme_fabrics 00:24:04.056 17:27:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:04.056 17:27:00 -- nvmf/common.sh@123 -- # set -e 00:24:04.056 17:27:00 -- nvmf/common.sh@124 -- # return 0 00:24:04.056 17:27:00 -- nvmf/common.sh@477 -- # '[' -n 1434268 ']' 00:24:04.056 17:27:00 -- nvmf/common.sh@478 -- # killprocess 1434268 00:24:04.056 17:27:00 -- common/autotest_common.sh@936 -- # '[' -z 1434268 ']' 00:24:04.056 17:27:00 -- common/autotest_common.sh@940 -- # kill -0 1434268 00:24:04.056 17:27:00 -- common/autotest_common.sh@941 -- # uname 00:24:04.056 17:27:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:04.056 17:27:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1434268 00:24:04.056 17:27:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:04.056 17:27:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:04.056 17:27:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1434268' 00:24:04.056 killing process with pid 1434268 00:24:04.056 17:27:00 -- common/autotest_common.sh@955 -- # kill 1434268 00:24:04.056 17:27:00 -- common/autotest_common.sh@960 -- # wait 1434268 00:24:04.625 17:27:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:04.625 17:27:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:04.625 00:24:04.625 real 0m13.743s 00:24:04.625 user 0m33.290s 00:24:04.625 sys 0m6.142s 00:24:04.625 17:27:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:04.625 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.625 ************************************ 00:24:04.625 END TEST nvmf_shutdown_tc1 00:24:04.625 ************************************ 00:24:04.625 17:27:01 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:04.625 17:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:04.625 17:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.625 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.625 ************************************ 00:24:04.625 START TEST nvmf_shutdown_tc2 00:24:04.625 ************************************ 00:24:04.625 17:27:01 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:24:04.625 17:27:01 -- target/shutdown.sh@98 -- # starttarget 00:24:04.625 17:27:01 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:04.625 17:27:01 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:04.625 17:27:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.625 17:27:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:04.625 17:27:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:04.625 17:27:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:04.625 17:27:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.625 17:27:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.625 17:27:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.625 17:27:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:04.625 17:27:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:04.625 17:27:01 -- nvmf/common.sh@284 -- # xtrace_disable 
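The tc1 teardown traced just before this point (nvmftestfini, nvmfcleanup, killprocess) comes down to unloading the initiator modules and killing the target by PID. A condensed sketch of that sequence, with this run's values in comments; the retry loop and sudo handling in the real helpers are omitted:

    # Sketch of the tc1 cleanup as traced (nvmfpid was 1434268, the target started for tc1).
    sync
    modprobe -v -r nvme-rdma         # the trace shows "rmmod nvme_rdma" and "rmmod nvme_fabrics"
    modprobe -v -r nvme-fabrics
    process_name=$(ps --no-headers -o comm= "$nvmfpid")   # "reactor_1" in this run
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid"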
00:24:04.625 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.625 17:27:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:04.625 17:27:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:04.625 17:27:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:04.625 17:27:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:04.625 17:27:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:04.625 17:27:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:04.625 17:27:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:04.625 17:27:01 -- nvmf/common.sh@294 -- # net_devs=() 00:24:04.625 17:27:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:04.625 17:27:01 -- nvmf/common.sh@295 -- # e810=() 00:24:04.625 17:27:01 -- nvmf/common.sh@295 -- # local -ga e810 00:24:04.625 17:27:01 -- nvmf/common.sh@296 -- # x722=() 00:24:04.625 17:27:01 -- nvmf/common.sh@296 -- # local -ga x722 00:24:04.625 17:27:01 -- nvmf/common.sh@297 -- # mlx=() 00:24:04.625 17:27:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:04.625 17:27:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.625 17:27:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.625 17:27:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.625 17:27:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.625 17:27:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.625 17:27:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.625 17:27:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.626 17:27:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.626 17:27:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.626 17:27:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.626 17:27:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.626 17:27:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:04.626 17:27:01 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:04.626 17:27:01 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:04.626 17:27:01 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:04.626 17:27:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:04.626 17:27:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:04.626 17:27:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:04.626 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:04.626 17:27:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:04.626 17:27:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:04.626 17:27:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:04.626 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:04.626 17:27:01 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:04.626 17:27:01 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:04.626 17:27:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:04.626 17:27:01 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:04.626 17:27:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.626 17:27:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:04.626 17:27:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.626 17:27:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:04.626 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:04.626 17:27:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.626 17:27:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:04.626 17:27:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.626 17:27:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:04.626 17:27:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.626 17:27:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:04.626 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:04.626 17:27:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.626 17:27:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:04.626 17:27:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:04.626 17:27:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:04.626 17:27:01 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:04.626 17:27:01 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:04.626 17:27:01 -- nvmf/common.sh@57 -- # uname 00:24:04.626 17:27:01 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:04.626 17:27:01 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:04.626 17:27:01 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:04.626 17:27:01 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:04.626 17:27:01 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:04.626 17:27:01 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:04.626 17:27:01 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:04.626 17:27:01 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:04.626 17:27:01 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:04.626 17:27:01 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:04.626 17:27:01 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:04.626 17:27:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:04.626 17:27:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:04.626 17:27:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:04.626 17:27:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:04.886 17:27:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:04.886 17:27:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:04.886 
17:27:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@104 -- # continue 2 00:24:04.886 17:27:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@104 -- # continue 2 00:24:04.886 17:27:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:04.886 17:27:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.886 17:27:01 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:04.886 17:27:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:04.886 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:04.886 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:04.886 altname enp217s0f0np0 00:24:04.886 altname ens818f0np0 00:24:04.886 inet 192.168.100.8/24 scope global mlx_0_0 00:24:04.886 valid_lft forever preferred_lft forever 00:24:04.886 17:27:01 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:04.886 17:27:01 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.886 17:27:01 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:04.886 17:27:01 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:04.886 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:04.886 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:04.886 altname enp217s0f1np1 00:24:04.886 altname ens818f1np1 00:24:04.886 inet 192.168.100.9/24 scope global mlx_0_1 00:24:04.886 valid_lft forever preferred_lft forever 00:24:04.886 17:27:01 -- nvmf/common.sh@410 -- # return 0 00:24:04.886 17:27:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:04.886 17:27:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:04.886 17:27:01 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:04.886 17:27:01 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:04.886 17:27:01 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:04.886 17:27:01 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:04.886 17:27:01 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:04.886 17:27:01 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:04.886 17:27:01 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:04.886 17:27:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:24:04.886 17:27:01 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@104 -- # continue 2 00:24:04.886 17:27:01 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.886 17:27:01 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:04.886 17:27:01 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@104 -- # continue 2 00:24:04.886 17:27:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:04.886 17:27:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.886 17:27:01 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:04.886 17:27:01 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.886 17:27:01 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.886 17:27:01 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:04.886 192.168.100.9' 00:24:04.886 17:27:01 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:04.886 192.168.100.9' 00:24:04.886 17:27:01 -- nvmf/common.sh@445 -- # head -n 1 00:24:04.886 17:27:01 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:04.886 17:27:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:04.886 192.168.100.9' 00:24:04.886 17:27:01 -- nvmf/common.sh@446 -- # tail -n +2 00:24:04.886 17:27:01 -- nvmf/common.sh@446 -- # head -n 1 00:24:04.886 17:27:01 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:04.886 17:27:01 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:04.886 17:27:01 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:04.886 17:27:01 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:04.886 17:27:01 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:04.886 17:27:01 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:04.886 17:27:01 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:04.886 17:27:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:04.886 17:27:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:04.886 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.886 17:27:01 -- nvmf/common.sh@469 -- # nvmfpid=1435894 00:24:04.886 17:27:01 -- nvmf/common.sh@470 -- # waitforlisten 1435894 00:24:04.886 17:27:01 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:04.886 17:27:01 -- common/autotest_common.sh@829 -- # '[' -z 1435894 ']' 00:24:04.886 17:27:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.886 17:27:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.886 17:27:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.886 17:27:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.886 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.886 [2024-12-14 17:27:01.513276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:04.886 [2024-12-14 17:27:01.513331] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.886 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.146 [2024-12-14 17:27:01.584165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.146 [2024-12-14 17:27:01.621185] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:05.146 [2024-12-14 17:27:01.621300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.146 [2024-12-14 17:27:01.621310] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.146 [2024-12-14 17:27:01.621318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.146 [2024-12-14 17:27:01.621428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.146 [2024-12-14 17:27:01.621517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.146 [2024-12-14 17:27:01.621626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.146 [2024-12-14 17:27:01.621628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:05.714 17:27:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.714 17:27:02 -- common/autotest_common.sh@862 -- # return 0 00:24:05.714 17:27:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:05.714 17:27:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:05.714 17:27:02 -- common/autotest_common.sh@10 -- # set +x 00:24:05.714 17:27:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.714 17:27:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:05.714 17:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.714 17:27:02 -- common/autotest_common.sh@10 -- # set +x 00:24:05.974 [2024-12-14 17:27:02.409784] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x155e3c0/0x1562890) succeed. 00:24:05.974 [2024-12-14 17:27:02.418911] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x155f960/0x15a3f30) succeed. 
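tc2 then brings up a fresh target exactly as tc1 did: start nvmf_tgt on core mask 0x1E, wait for its RPC socket, and create the RDMA transport before any subsystems are defined. Condensed from the trace above, with this run's PID noted; treat it as a sketch of the traced steps rather than the nvmfappstart helper itself:

    # Sketch of the tc2 bring-up traced above (nvmfpid is 1435894 in this run).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"         # waits for /var/tmp/spdk.sock to accept RPCs
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192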
00:24:05.974 17:27:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.974 17:27:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:05.974 17:27:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:05.974 17:27:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.974 17:27:02 -- common/autotest_common.sh@10 -- # set +x 00:24:05.974 17:27:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.974 17:27:02 -- target/shutdown.sh@28 -- # cat 00:24:05.974 17:27:02 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:05.974 17:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.974 17:27:02 -- common/autotest_common.sh@10 -- # set +x 00:24:05.974 Malloc1 00:24:05.974 [2024-12-14 17:27:02.644919] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:06.232 Malloc2 00:24:06.232 Malloc3 00:24:06.232 Malloc4 00:24:06.232 Malloc5 00:24:06.232 Malloc6 00:24:06.232 Malloc7 00:24:06.492 Malloc8 00:24:06.492 Malloc9 00:24:06.492 Malloc10 00:24:06.492 17:27:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.492 17:27:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:06.492 17:27:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.492 17:27:03 -- common/autotest_common.sh@10 -- # set +x 00:24:06.492 17:27:03 -- target/shutdown.sh@102 -- # perfpid=1436231 00:24:06.492 17:27:03 -- target/shutdown.sh@103 -- # waitforlisten 1436231 /var/tmp/bdevperf.sock 00:24:06.492 17:27:03 -- common/autotest_common.sh@829 -- # '[' -z 1436231 ']' 00:24:06.492 17:27:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.492 17:27:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.492 17:27:03 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:06.492 17:27:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:06.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.492 17:27:03 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:06.492 17:27:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.492 17:27:03 -- nvmf/common.sh@520 -- # config=() 00:24:06.492 17:27:03 -- common/autotest_common.sh@10 -- # set +x 00:24:06.492 17:27:03 -- nvmf/common.sh@520 -- # local subsystem config 00:24:06.492 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.492 { 00:24:06.492 "params": { 00:24:06.492 "name": "Nvme$subsystem", 00:24:06.492 "trtype": "$TEST_TRANSPORT", 00:24:06.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.492 "adrfam": "ipv4", 00:24:06.492 "trsvcid": "$NVMF_PORT", 00:24:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.492 "hdgst": ${hdgst:-false}, 00:24:06.492 "ddgst": ${ddgst:-false} 00:24:06.492 }, 00:24:06.492 "method": "bdev_nvme_attach_controller" 00:24:06.492 } 00:24:06.492 EOF 00:24:06.492 )") 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.492 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.492 { 00:24:06.492 "params": { 00:24:06.492 "name": "Nvme$subsystem", 00:24:06.492 "trtype": "$TEST_TRANSPORT", 00:24:06.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.492 "adrfam": "ipv4", 00:24:06.492 "trsvcid": "$NVMF_PORT", 00:24:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.492 "hdgst": ${hdgst:-false}, 00:24:06.492 "ddgst": ${ddgst:-false} 00:24:06.492 }, 00:24:06.492 "method": "bdev_nvme_attach_controller" 00:24:06.492 } 00:24:06.492 EOF 00:24:06.492 )") 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.492 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.492 { 00:24:06.492 "params": { 00:24:06.492 "name": "Nvme$subsystem", 00:24:06.492 "trtype": "$TEST_TRANSPORT", 00:24:06.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.492 "adrfam": "ipv4", 00:24:06.492 "trsvcid": "$NVMF_PORT", 00:24:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.492 "hdgst": ${hdgst:-false}, 00:24:06.492 "ddgst": ${ddgst:-false} 00:24:06.492 }, 00:24:06.492 "method": "bdev_nvme_attach_controller" 00:24:06.492 } 00:24:06.492 EOF 00:24:06.492 )") 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.492 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.492 { 00:24:06.492 "params": { 00:24:06.492 "name": "Nvme$subsystem", 00:24:06.492 "trtype": "$TEST_TRANSPORT", 00:24:06.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.492 "adrfam": "ipv4", 00:24:06.492 "trsvcid": "$NVMF_PORT", 00:24:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.492 "hdgst": ${hdgst:-false}, 00:24:06.492 "ddgst": ${ddgst:-false} 00:24:06.492 }, 00:24:06.492 "method": "bdev_nvme_attach_controller" 00:24:06.492 } 00:24:06.492 EOF 00:24:06.492 )") 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.492 17:27:03 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.492 { 00:24:06.492 "params": { 00:24:06.492 "name": "Nvme$subsystem", 00:24:06.492 "trtype": "$TEST_TRANSPORT", 00:24:06.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.492 "adrfam": "ipv4", 00:24:06.492 "trsvcid": "$NVMF_PORT", 00:24:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.492 "hdgst": ${hdgst:-false}, 00:24:06.492 "ddgst": ${ddgst:-false} 00:24:06.492 }, 00:24:06.492 "method": "bdev_nvme_attach_controller" 00:24:06.492 } 00:24:06.492 EOF 00:24:06.492 )") 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.492 [2024-12-14 17:27:03.130547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:06.492 [2024-12-14 17:27:03.130601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436231 ] 00:24:06.492 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.492 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.492 { 00:24:06.492 "params": { 00:24:06.492 "name": "Nvme$subsystem", 00:24:06.492 "trtype": "$TEST_TRANSPORT", 00:24:06.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.492 "adrfam": "ipv4", 00:24:06.492 "trsvcid": "$NVMF_PORT", 00:24:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.493 "hdgst": ${hdgst:-false}, 00:24:06.493 "ddgst": ${ddgst:-false} 00:24:06.493 }, 00:24:06.493 "method": "bdev_nvme_attach_controller" 00:24:06.493 } 00:24:06.493 EOF 00:24:06.493 )") 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.493 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.493 { 00:24:06.493 "params": { 00:24:06.493 "name": "Nvme$subsystem", 00:24:06.493 "trtype": "$TEST_TRANSPORT", 00:24:06.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.493 "adrfam": "ipv4", 00:24:06.493 "trsvcid": "$NVMF_PORT", 00:24:06.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.493 "hdgst": ${hdgst:-false}, 00:24:06.493 "ddgst": ${ddgst:-false} 00:24:06.493 }, 00:24:06.493 "method": "bdev_nvme_attach_controller" 00:24:06.493 } 00:24:06.493 EOF 00:24:06.493 )") 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.493 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.493 { 00:24:06.493 "params": { 00:24:06.493 "name": "Nvme$subsystem", 00:24:06.493 "trtype": "$TEST_TRANSPORT", 00:24:06.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.493 "adrfam": "ipv4", 00:24:06.493 "trsvcid": "$NVMF_PORT", 00:24:06.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.493 "hdgst": ${hdgst:-false}, 00:24:06.493 "ddgst": ${ddgst:-false} 00:24:06.493 }, 00:24:06.493 "method": "bdev_nvme_attach_controller" 00:24:06.493 } 00:24:06.493 EOF 00:24:06.493 )") 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.493 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.493 { 00:24:06.493 
"params": { 00:24:06.493 "name": "Nvme$subsystem", 00:24:06.493 "trtype": "$TEST_TRANSPORT", 00:24:06.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.493 "adrfam": "ipv4", 00:24:06.493 "trsvcid": "$NVMF_PORT", 00:24:06.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.493 "hdgst": ${hdgst:-false}, 00:24:06.493 "ddgst": ${ddgst:-false} 00:24:06.493 }, 00:24:06.493 "method": "bdev_nvme_attach_controller" 00:24:06.493 } 00:24:06.493 EOF 00:24:06.493 )") 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.493 17:27:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.493 { 00:24:06.493 "params": { 00:24:06.493 "name": "Nvme$subsystem", 00:24:06.493 "trtype": "$TEST_TRANSPORT", 00:24:06.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.493 "adrfam": "ipv4", 00:24:06.493 "trsvcid": "$NVMF_PORT", 00:24:06.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.493 "hdgst": ${hdgst:-false}, 00:24:06.493 "ddgst": ${ddgst:-false} 00:24:06.493 }, 00:24:06.493 "method": "bdev_nvme_attach_controller" 00:24:06.493 } 00:24:06.493 EOF 00:24:06.493 )") 00:24:06.493 17:27:03 -- nvmf/common.sh@542 -- # cat 00:24:06.493 17:27:03 -- nvmf/common.sh@544 -- # jq . 00:24:06.753 17:27:03 -- nvmf/common.sh@545 -- # IFS=, 00:24:06.753 17:27:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme1", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme2", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme3", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme4", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme5", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:06.753 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme6", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme7", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme8", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme9", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 },{ 00:24:06.753 "params": { 00:24:06.753 "name": "Nvme10", 00:24:06.753 "trtype": "rdma", 00:24:06.753 "traddr": "192.168.100.8", 00:24:06.753 "adrfam": "ipv4", 00:24:06.753 "trsvcid": "4420", 00:24:06.753 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:06.753 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:06.753 "hdgst": false, 00:24:06.753 "ddgst": false 00:24:06.753 }, 00:24:06.753 "method": "bdev_nvme_attach_controller" 00:24:06.753 }' 00:24:06.753 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.753 [2024-12-14 17:27:03.224342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.753 [2024-12-14 17:27:03.261050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.691 Running I/O for 10 seconds... 
00:24:08.259 17:27:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.259 17:27:04 -- common/autotest_common.sh@862 -- # return 0 00:24:08.259 17:27:04 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:08.259 17:27:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.259 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:24:08.259 17:27:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.259 17:27:04 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:08.259 17:27:04 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:08.259 17:27:04 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:08.259 17:27:04 -- target/shutdown.sh@57 -- # local ret=1 00:24:08.259 17:27:04 -- target/shutdown.sh@58 -- # local i 00:24:08.259 17:27:04 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:08.259 17:27:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:08.260 17:27:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:08.260 17:27:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:08.260 17:27:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.260 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:24:08.260 17:27:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.260 17:27:04 -- target/shutdown.sh@60 -- # read_io_count=461 00:24:08.260 17:27:04 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:24:08.260 17:27:04 -- target/shutdown.sh@64 -- # ret=0 00:24:08.260 17:27:04 -- target/shutdown.sh@65 -- # break 00:24:08.260 17:27:04 -- target/shutdown.sh@69 -- # return 0 00:24:08.260 17:27:04 -- target/shutdown.sh@109 -- # killprocess 1436231 00:24:08.260 17:27:04 -- common/autotest_common.sh@936 -- # '[' -z 1436231 ']' 00:24:08.260 17:27:04 -- common/autotest_common.sh@940 -- # kill -0 1436231 00:24:08.260 17:27:04 -- common/autotest_common.sh@941 -- # uname 00:24:08.260 17:27:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:08.260 17:27:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1436231 00:24:08.522 17:27:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:08.522 17:27:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:08.522 17:27:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1436231' 00:24:08.522 killing process with pid 1436231 00:24:08.522 17:27:04 -- common/autotest_common.sh@955 -- # kill 1436231 00:24:08.522 17:27:04 -- common/autotest_common.sh@960 -- # wait 1436231 00:24:08.522 Received shutdown signal, test time was about 0.930847 seconds 00:24:08.522 00:24:08.522 Latency(us) 00:24:08.522 [2024-12-14T16:27:05.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.522 [2024-12-14T16:27:05.206Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.522 Verification LBA range: start 0x0 length 0x400 00:24:08.522 Nvme1n1 : 0.92 711.84 44.49 0.00 0.00 88895.67 7654.60 102760.45 00:24:08.522 [2024-12-14T16:27:05.206Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.522 Verification LBA range: start 0x0 length 0x400 00:24:08.522 Nvme2n1 : 0.92 711.04 44.44 0.00 0.00 88245.18 7916.75 99405.00 00:24:08.522 [2024-12-14T16:27:05.206Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.522 Verification LBA range: start 0x0 length 0x400 00:24:08.522 Nvme3n1 : 
0.92 743.89 46.49 0.00 0.00 83643.01 7025.46 93113.55 00:24:08.522 [2024-12-14T16:27:05.206Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.522 Verification LBA range: start 0x0 length 0x400 00:24:08.522 Nvme4n1 : 0.92 754.01 47.13 0.00 0.00 81897.73 8074.04 71722.60 00:24:08.522 [2024-12-14T16:27:05.207Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.523 Verification LBA range: start 0x0 length 0x400 00:24:08.523 Nvme5n1 : 0.93 747.83 46.74 0.00 0.00 82056.53 8178.89 69625.45 00:24:08.523 [2024-12-14T16:27:05.207Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.523 Verification LBA range: start 0x0 length 0x400 00:24:08.523 Nvme6n1 : 0.93 708.14 44.26 0.00 0.00 86100.40 8388.61 98146.71 00:24:08.523 [2024-12-14T16:27:05.207Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.523 Verification LBA range: start 0x0 length 0x400 00:24:08.523 Nvme7n1 : 0.93 746.23 46.64 0.00 0.00 81034.47 8545.89 70883.74 00:24:08.523 [2024-12-14T16:27:05.207Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.523 Verification LBA range: start 0x0 length 0x400 00:24:08.523 Nvme8n1 : 0.93 745.47 46.59 0.00 0.00 80553.14 8650.75 69625.45 00:24:08.523 [2024-12-14T16:27:05.207Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.523 Verification LBA range: start 0x0 length 0x400 00:24:08.523 Nvme9n1 : 0.93 656.50 41.03 0.00 0.00 90745.74 8755.61 146800.64 00:24:08.523 [2024-12-14T16:27:05.207Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:08.523 Verification LBA range: start 0x0 length 0x400 00:24:08.523 Nvme10n1 : 0.93 655.86 40.99 0.00 0.00 90054.29 7864.32 145122.92 00:24:08.523 [2024-12-14T16:27:05.207Z] =================================================================================================================== 00:24:08.523 [2024-12-14T16:27:05.207Z] Total : 7180.81 448.80 0.00 0.00 85156.66 7025.46 146800.64 00:24:08.784 17:27:05 -- target/shutdown.sh@112 -- # sleep 1 00:24:09.720 17:27:06 -- target/shutdown.sh@113 -- # kill -0 1435894 00:24:09.720 17:27:06 -- target/shutdown.sh@115 -- # stoptarget 00:24:09.720 17:27:06 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:09.720 17:27:06 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:09.720 17:27:06 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:09.720 17:27:06 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:09.720 17:27:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:09.720 17:27:06 -- nvmf/common.sh@116 -- # sync 00:24:09.720 17:27:06 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:09.720 17:27:06 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:09.720 17:27:06 -- nvmf/common.sh@119 -- # set +e 00:24:09.720 17:27:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:09.720 17:27:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:09.720 rmmod nvme_rdma 00:24:09.720 rmmod nvme_fabrics 00:24:09.720 17:27:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:09.720 17:27:06 -- nvmf/common.sh@123 -- # set -e 00:24:09.720 17:27:06 -- nvmf/common.sh@124 -- # return 0 00:24:09.720 17:27:06 -- nvmf/common.sh@477 -- # '[' -n 1435894 ']' 00:24:09.720 17:27:06 -- nvmf/common.sh@478 -- # killprocess 1435894 00:24:09.720 17:27:06 -- 
common/autotest_common.sh@936 -- # '[' -z 1435894 ']' 00:24:09.720 17:27:06 -- common/autotest_common.sh@940 -- # kill -0 1435894 00:24:09.720 17:27:06 -- common/autotest_common.sh@941 -- # uname 00:24:09.720 17:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:09.720 17:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1435894 00:24:09.980 17:27:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:09.980 17:27:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:09.980 17:27:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1435894' 00:24:09.980 killing process with pid 1435894 00:24:09.980 17:27:06 -- common/autotest_common.sh@955 -- # kill 1435894 00:24:09.980 17:27:06 -- common/autotest_common.sh@960 -- # wait 1435894 00:24:10.239 17:27:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:10.239 17:27:06 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:10.239 00:24:10.239 real 0m5.670s 00:24:10.239 user 0m23.082s 00:24:10.239 sys 0m1.240s 00:24:10.239 17:27:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:10.239 17:27:06 -- common/autotest_common.sh@10 -- # set +x 00:24:10.239 ************************************ 00:24:10.239 END TEST nvmf_shutdown_tc2 00:24:10.239 ************************************ 00:24:10.500 17:27:06 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:10.500 17:27:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:10.500 17:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.500 17:27:06 -- common/autotest_common.sh@10 -- # set +x 00:24:10.500 ************************************ 00:24:10.500 START TEST nvmf_shutdown_tc3 00:24:10.500 ************************************ 00:24:10.500 17:27:06 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:24:10.500 17:27:06 -- target/shutdown.sh@120 -- # starttarget 00:24:10.500 17:27:06 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:10.500 17:27:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:10.500 17:27:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.500 17:27:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:10.500 17:27:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:10.500 17:27:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:10.500 17:27:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.500 17:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.500 17:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.500 17:27:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:10.500 17:27:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:10.500 17:27:06 -- common/autotest_common.sh@10 -- # set +x 00:24:10.500 17:27:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.500 17:27:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:10.500 17:27:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:10.500 17:27:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:10.500 17:27:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:10.500 17:27:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:10.500 17:27:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:10.500 17:27:06 -- nvmf/common.sh@294 -- # net_devs=() 00:24:10.500 17:27:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:10.500 17:27:06 -- 
nvmf/common.sh@295 -- # e810=() 00:24:10.500 17:27:06 -- nvmf/common.sh@295 -- # local -ga e810 00:24:10.500 17:27:06 -- nvmf/common.sh@296 -- # x722=() 00:24:10.500 17:27:06 -- nvmf/common.sh@296 -- # local -ga x722 00:24:10.500 17:27:06 -- nvmf/common.sh@297 -- # mlx=() 00:24:10.500 17:27:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:10.500 17:27:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.500 17:27:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:10.500 17:27:06 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:10.500 17:27:06 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:10.500 17:27:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:10.500 17:27:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:10.500 17:27:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.500 17:27:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:10.500 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:10.500 17:27:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.500 17:27:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.500 17:27:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:10.500 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:10.500 17:27:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.500 17:27:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:10.500 17:27:06 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.500 17:27:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.500 17:27:06 -- nvmf/common.sh@383 -- # 
(( 1 == 0 )) 00:24:10.500 17:27:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.500 17:27:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:10.500 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:10.500 17:27:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.500 17:27:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.500 17:27:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.500 17:27:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.500 17:27:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.500 17:27:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:10.500 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:10.500 17:27:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.500 17:27:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:10.500 17:27:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:10.500 17:27:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:10.500 17:27:06 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:10.500 17:27:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:10.500 17:27:06 -- nvmf/common.sh@57 -- # uname 00:24:10.500 17:27:06 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:10.500 17:27:06 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:10.500 17:27:06 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:10.500 17:27:06 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:10.500 17:27:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:10.500 17:27:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:10.500 17:27:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:10.500 17:27:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:10.500 17:27:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:10.500 17:27:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:10.500 17:27:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:10.500 17:27:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.500 17:27:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:10.500 17:27:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:10.500 17:27:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.500 17:27:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:10.500 17:27:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.500 17:27:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.500 17:27:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.500 17:27:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:10.500 17:27:07 -- nvmf/common.sh@104 -- # continue 2 00:24:10.500 17:27:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.500 17:27:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.500 17:27:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.500 17:27:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.500 17:27:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.500 17:27:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:10.500 17:27:07 -- nvmf/common.sh@104 -- # continue 2 00:24:10.500 17:27:07 -- nvmf/common.sh@72 -- # for nic_name 
in $(get_rdma_if_list) 00:24:10.500 17:27:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:10.500 17:27:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:10.500 17:27:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:10.500 17:27:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.500 17:27:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.500 17:27:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:10.500 17:27:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:10.500 17:27:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:10.500 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.500 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:10.500 altname enp217s0f0np0 00:24:10.500 altname ens818f0np0 00:24:10.500 inet 192.168.100.8/24 scope global mlx_0_0 00:24:10.500 valid_lft forever preferred_lft forever 00:24:10.500 17:27:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:10.500 17:27:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:10.500 17:27:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:10.500 17:27:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:10.500 17:27:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.500 17:27:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.500 17:27:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:10.500 17:27:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:10.500 17:27:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:10.500 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.500 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:10.500 altname enp217s0f1np1 00:24:10.501 altname ens818f1np1 00:24:10.501 inet 192.168.100.9/24 scope global mlx_0_1 00:24:10.501 valid_lft forever preferred_lft forever 00:24:10.501 17:27:07 -- nvmf/common.sh@410 -- # return 0 00:24:10.501 17:27:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:10.501 17:27:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:10.501 17:27:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:10.501 17:27:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:10.501 17:27:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:10.501 17:27:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.501 17:27:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:10.501 17:27:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:10.501 17:27:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.501 17:27:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:10.501 17:27:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.501 17:27:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.501 17:27:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.501 17:27:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:10.501 17:27:07 -- nvmf/common.sh@104 -- # continue 2 00:24:10.501 17:27:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:10.501 17:27:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.501 17:27:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.501 17:27:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.501 17:27:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.501 17:27:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:10.501 17:27:07 -- 
nvmf/common.sh@104 -- # continue 2 00:24:10.501 17:27:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:10.501 17:27:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:10.501 17:27:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:10.501 17:27:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.501 17:27:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:10.501 17:27:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.501 17:27:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:10.501 17:27:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:10.501 17:27:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:10.501 17:27:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:10.501 17:27:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:10.501 17:27:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:10.501 17:27:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:10.501 192.168.100.9' 00:24:10.501 17:27:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:10.501 192.168.100.9' 00:24:10.501 17:27:07 -- nvmf/common.sh@445 -- # head -n 1 00:24:10.761 17:27:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:10.761 17:27:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:10.761 192.168.100.9' 00:24:10.761 17:27:07 -- nvmf/common.sh@446 -- # tail -n +2 00:24:10.761 17:27:07 -- nvmf/common.sh@446 -- # head -n 1 00:24:10.761 17:27:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:10.761 17:27:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:10.761 17:27:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:10.761 17:27:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:10.761 17:27:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:10.761 17:27:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:10.761 17:27:07 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:10.761 17:27:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:10.761 17:27:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.761 17:27:07 -- common/autotest_common.sh@10 -- # set +x 00:24:10.761 17:27:07 -- nvmf/common.sh@469 -- # nvmfpid=1437372 00:24:10.761 17:27:07 -- nvmf/common.sh@470 -- # waitforlisten 1437372 00:24:10.761 17:27:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:10.761 17:27:07 -- common/autotest_common.sh@829 -- # '[' -z 1437372 ']' 00:24:10.761 17:27:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.761 17:27:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.761 17:27:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.761 17:27:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.761 17:27:07 -- common/autotest_common.sh@10 -- # set +x 00:24:10.761 [2024-12-14 17:27:07.274673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:10.761 [2024-12-14 17:27:07.274725] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.761 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.761 [2024-12-14 17:27:07.345755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.761 [2024-12-14 17:27:07.382290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:10.761 [2024-12-14 17:27:07.382425] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.761 [2024-12-14 17:27:07.382435] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.761 [2024-12-14 17:27:07.382444] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.761 [2024-12-14 17:27:07.382566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.761 [2024-12-14 17:27:07.382654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.761 [2024-12-14 17:27:07.382763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.761 [2024-12-14 17:27:07.382765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:11.698 17:27:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.698 17:27:08 -- common/autotest_common.sh@862 -- # return 0 00:24:11.698 17:27:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:11.698 17:27:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:11.698 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:24:11.698 17:27:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.698 17:27:08 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:11.698 17:27:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.698 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:24:11.698 [2024-12-14 17:27:08.168010] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7063c0/0x70a890) succeed. 00:24:11.698 [2024-12-14 17:27:08.177480] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x707960/0x74bf30) succeed. 
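Editor's note: the tc3 target is configured the same way as the tc2 one earlier: nvmf_create_transport is issued over /var/tmp/spdk.sock, and the generated rpcs.txt (built by the cat loop that follows) then creates ten Malloc-backed subsystems plus an RDMA listener on 192.168.100.8:4420. A hedged sketch of the equivalent manual rpc.py calls for a single subsystem; the Malloc size, block size and serial number are illustrative assumptions, and only the transport options, NQN pattern, address and port come from the log:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Transport options as seen in the trace: RDMA, 1024 shared buffers,
# 8192-byte in-capsule data size.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# One Malloc-backed subsystem (sizes and serial are made up for this sketch).
$rpc bdev_malloc_create -b Malloc1 128 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma \
    -a 192.168.100.8 -s 4420

The test itself drives these steps through rpc_cmd and the batched rpcs.txt rather than individual rpc.py invocations, so the exact call sequence may differ in detail.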
00:24:11.698 17:27:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.698 17:27:08 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:11.698 17:27:08 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:11.698 17:27:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.698 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:24:11.698 17:27:08 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.698 17:27:08 -- target/shutdown.sh@28 -- # cat 00:24:11.698 17:27:08 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:11.698 17:27:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.698 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:24:11.698 Malloc1 00:24:11.957 [2024-12-14 17:27:08.399214] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:11.957 Malloc2 00:24:11.957 Malloc3 00:24:11.957 Malloc4 00:24:11.957 Malloc5 00:24:11.957 Malloc6 00:24:12.217 Malloc7 00:24:12.217 Malloc8 00:24:12.217 Malloc9 00:24:12.217 Malloc10 00:24:12.217 17:27:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.217 17:27:08 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:12.217 17:27:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.217 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:24:12.217 17:27:08 -- target/shutdown.sh@124 -- # perfpid=1437704 00:24:12.217 17:27:08 -- target/shutdown.sh@125 -- # waitforlisten 1437704 /var/tmp/bdevperf.sock 00:24:12.217 17:27:08 -- common/autotest_common.sh@829 -- # '[' -z 1437704 ']' 00:24:12.217 17:27:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.217 17:27:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.217 17:27:08 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:12.217 17:27:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:12.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.217 17:27:08 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:12.217 17:27:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.217 17:27:08 -- nvmf/common.sh@520 -- # config=() 00:24:12.217 17:27:08 -- common/autotest_common.sh@10 -- # set +x 00:24:12.217 17:27:08 -- nvmf/common.sh@520 -- # local subsystem config 00:24:12.217 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.217 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.217 { 00:24:12.217 "params": { 00:24:12.217 "name": "Nvme$subsystem", 00:24:12.217 "trtype": "$TEST_TRANSPORT", 00:24:12.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.217 "adrfam": "ipv4", 00:24:12.217 "trsvcid": "$NVMF_PORT", 00:24:12.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.217 "hdgst": ${hdgst:-false}, 00:24:12.217 "ddgst": ${ddgst:-false} 00:24:12.217 }, 00:24:12.217 "method": "bdev_nvme_attach_controller" 00:24:12.217 } 00:24:12.217 EOF 00:24:12.217 )") 00:24:12.217 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.217 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.217 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.217 { 00:24:12.217 "params": { 00:24:12.217 "name": "Nvme$subsystem", 00:24:12.217 "trtype": "$TEST_TRANSPORT", 00:24:12.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.217 "adrfam": "ipv4", 00:24:12.217 "trsvcid": "$NVMF_PORT", 00:24:12.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.217 "hdgst": ${hdgst:-false}, 00:24:12.217 "ddgst": ${ddgst:-false} 00:24:12.217 }, 00:24:12.217 "method": "bdev_nvme_attach_controller" 00:24:12.217 } 00:24:12.217 EOF 00:24:12.217 )") 00:24:12.217 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.217 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.217 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.217 { 00:24:12.217 "params": { 00:24:12.217 "name": "Nvme$subsystem", 00:24:12.217 "trtype": "$TEST_TRANSPORT", 00:24:12.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.217 "adrfam": "ipv4", 00:24:12.217 "trsvcid": "$NVMF_PORT", 00:24:12.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.217 "hdgst": ${hdgst:-false}, 00:24:12.217 "ddgst": ${ddgst:-false} 00:24:12.217 }, 00:24:12.217 "method": "bdev_nvme_attach_controller" 00:24:12.217 } 00:24:12.217 EOF 00:24:12.217 )") 00:24:12.217 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.217 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.217 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.217 { 00:24:12.217 "params": { 00:24:12.218 "name": "Nvme$subsystem", 00:24:12.218 "trtype": "$TEST_TRANSPORT", 00:24:12.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.218 "adrfam": "ipv4", 00:24:12.218 "trsvcid": "$NVMF_PORT", 00:24:12.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.218 "hdgst": ${hdgst:-false}, 00:24:12.218 "ddgst": ${ddgst:-false} 00:24:12.218 }, 00:24:12.218 "method": "bdev_nvme_attach_controller" 00:24:12.218 } 00:24:12.218 EOF 00:24:12.218 )") 00:24:12.218 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.218 17:27:08 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:12.218 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.218 { 00:24:12.218 "params": { 00:24:12.218 "name": "Nvme$subsystem", 00:24:12.218 "trtype": "$TEST_TRANSPORT", 00:24:12.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.218 "adrfam": "ipv4", 00:24:12.218 "trsvcid": "$NVMF_PORT", 00:24:12.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.218 "hdgst": ${hdgst:-false}, 00:24:12.218 "ddgst": ${ddgst:-false} 00:24:12.218 }, 00:24:12.218 "method": "bdev_nvme_attach_controller" 00:24:12.218 } 00:24:12.218 EOF 00:24:12.218 )") 00:24:12.218 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.218 [2024-12-14 17:27:08.889890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:12.218 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.218 [2024-12-14 17:27:08.889945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437704 ] 00:24:12.218 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.218 { 00:24:12.218 "params": { 00:24:12.218 "name": "Nvme$subsystem", 00:24:12.218 "trtype": "$TEST_TRANSPORT", 00:24:12.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.218 "adrfam": "ipv4", 00:24:12.218 "trsvcid": "$NVMF_PORT", 00:24:12.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.218 "hdgst": ${hdgst:-false}, 00:24:12.218 "ddgst": ${ddgst:-false} 00:24:12.218 }, 00:24:12.218 "method": "bdev_nvme_attach_controller" 00:24:12.218 } 00:24:12.218 EOF 00:24:12.218 )") 00:24:12.218 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.218 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.218 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.218 { 00:24:12.218 "params": { 00:24:12.218 "name": "Nvme$subsystem", 00:24:12.218 "trtype": "$TEST_TRANSPORT", 00:24:12.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.218 "adrfam": "ipv4", 00:24:12.218 "trsvcid": "$NVMF_PORT", 00:24:12.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.218 "hdgst": ${hdgst:-false}, 00:24:12.218 "ddgst": ${ddgst:-false} 00:24:12.218 }, 00:24:12.218 "method": "bdev_nvme_attach_controller" 00:24:12.218 } 00:24:12.218 EOF 00:24:12.218 )") 00:24:12.218 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.477 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.477 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.477 { 00:24:12.477 "params": { 00:24:12.477 "name": "Nvme$subsystem", 00:24:12.477 "trtype": "$TEST_TRANSPORT", 00:24:12.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.477 "adrfam": "ipv4", 00:24:12.477 "trsvcid": "$NVMF_PORT", 00:24:12.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.477 "hdgst": ${hdgst:-false}, 00:24:12.477 "ddgst": ${ddgst:-false} 00:24:12.477 }, 00:24:12.477 "method": "bdev_nvme_attach_controller" 00:24:12.477 } 00:24:12.477 EOF 00:24:12.477 )") 00:24:12.477 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.477 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.477 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.477 { 00:24:12.477 
"params": { 00:24:12.477 "name": "Nvme$subsystem", 00:24:12.477 "trtype": "$TEST_TRANSPORT", 00:24:12.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.477 "adrfam": "ipv4", 00:24:12.477 "trsvcid": "$NVMF_PORT", 00:24:12.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.477 "hdgst": ${hdgst:-false}, 00:24:12.477 "ddgst": ${ddgst:-false} 00:24:12.477 }, 00:24:12.477 "method": "bdev_nvme_attach_controller" 00:24:12.477 } 00:24:12.477 EOF 00:24:12.478 )") 00:24:12.478 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.478 17:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.478 17:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.478 { 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme$subsystem", 00:24:12.478 "trtype": "$TEST_TRANSPORT", 00:24:12.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "$NVMF_PORT", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.478 "hdgst": ${hdgst:-false}, 00:24:12.478 "ddgst": ${ddgst:-false} 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 } 00:24:12.478 EOF 00:24:12.478 )") 00:24:12.478 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.478 17:27:08 -- nvmf/common.sh@542 -- # cat 00:24:12.478 17:27:08 -- nvmf/common.sh@544 -- # jq . 00:24:12.478 17:27:08 -- nvmf/common.sh@545 -- # IFS=, 00:24:12.478 17:27:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme1", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme2", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme3", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme4", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme5", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme6", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme7", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme8", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme9", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 },{ 00:24:12.478 "params": { 00:24:12.478 "name": "Nvme10", 00:24:12.478 "trtype": "rdma", 00:24:12.478 "traddr": "192.168.100.8", 00:24:12.478 "adrfam": "ipv4", 00:24:12.478 "trsvcid": "4420", 00:24:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:12.478 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:12.478 "hdgst": false, 00:24:12.478 "ddgst": false 00:24:12.478 }, 00:24:12.478 "method": "bdev_nvme_attach_controller" 00:24:12.478 }' 00:24:12.478 [2024-12-14 17:27:08.964784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.478 [2024-12-14 17:27:09.000837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.416 Running I/O for 10 seconds... 
00:24:13.985 17:27:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.985 17:27:10 -- common/autotest_common.sh@862 -- # return 0 00:24:13.985 17:27:10 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:13.985 17:27:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.985 17:27:10 -- common/autotest_common.sh@10 -- # set +x 00:24:13.985 17:27:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.985 17:27:10 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.985 17:27:10 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:13.985 17:27:10 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:13.985 17:27:10 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:13.985 17:27:10 -- target/shutdown.sh@57 -- # local ret=1 00:24:13.985 17:27:10 -- target/shutdown.sh@58 -- # local i 00:24:13.985 17:27:10 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:13.985 17:27:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:13.985 17:27:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:13.985 17:27:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:13.985 17:27:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.985 17:27:10 -- common/autotest_common.sh@10 -- # set +x 00:24:13.985 17:27:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.985 17:27:10 -- target/shutdown.sh@60 -- # read_io_count=491 00:24:13.985 17:27:10 -- target/shutdown.sh@63 -- # '[' 491 -ge 100 ']' 00:24:13.985 17:27:10 -- target/shutdown.sh@64 -- # ret=0 00:24:13.985 17:27:10 -- target/shutdown.sh@65 -- # break 00:24:13.985 17:27:10 -- target/shutdown.sh@69 -- # return 0 00:24:13.985 17:27:10 -- target/shutdown.sh@134 -- # killprocess 1437372 00:24:13.985 17:27:10 -- common/autotest_common.sh@936 -- # '[' -z 1437372 ']' 00:24:13.985 17:27:10 -- common/autotest_common.sh@940 -- # kill -0 1437372 00:24:13.985 17:27:10 -- common/autotest_common.sh@941 -- # uname 00:24:14.244 17:27:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:14.244 17:27:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1437372 00:24:14.244 17:27:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:14.244 17:27:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:14.244 17:27:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1437372' 00:24:14.244 killing process with pid 1437372 00:24:14.244 17:27:10 -- common/autotest_common.sh@955 -- # kill 1437372 00:24:14.244 17:27:10 -- common/autotest_common.sh@960 -- # wait 1437372 00:24:14.244 [2024-12-14 17:27:10.744490] rdma.c: 918:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 1 00:24:14.814 17:27:11 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:14.814 17:27:11 -- target/shutdown.sh@138 -- # sleep 1 00:24:15.390 [2024-12-14 17:27:11.790580] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257100 was disconnected and freed. reset controller. 00:24:15.390 [2024-12-14 17:27:11.792954] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256ec0 was disconnected and freed. reset controller. 
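[editor's note] The target/shutdown.sh xtrace above (waitforio, @50-69) polls bdevperf over its RPC socket until at least 100 reads have completed (491 were observed here) before killing the target. The sketch below reconstructs that check from the trace; rpc_cmd is the SPDK test RPC wrapper seen in the trace, while the sleep cadence between retries is an assumption.

# Rough reconstruction of waitforio as traced above; not copied from shutdown.sh.
waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # Ask the bdevperf app on its RPC socket how many reads have completed so far.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        # Once I/O is demonstrably flowing, it is safe to shut the target down
        # underneath the initiator and exercise the reconnect/abort paths.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # retry interval is an assumption; the trace passed on the first poll
    done
    return $ret
}

Immediately after this check passes, the trace kills the target process (killprocess 1437372, running as reactor_1) while bdevperf still has I/O outstanding; the qpair-destroy warning and the long runs of "ABORTED - SQ DELETION" completions that follow are the expected fallout of that forced shutdown.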
00:24:15.390 [2024-12-14 17:27:11.793023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042f000 len:0x10000 key:0x184200 00:24:15.390 [2024-12-14 17:27:11.793061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:6b26 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.793126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x184200 00:24:15.390 [2024-12-14 17:27:11.793159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:6b26 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.793208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000047f280 len:0x10000 key:0x184200 00:24:15.390 [2024-12-14 17:27:11.793248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:6b26 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.793296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008df100 len:0x10000 key:0x183700 00:24:15.390 [2024-12-14 17:27:11.793328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:6b26 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.793376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183700 00:24:15.390 [2024-12-14 17:27:11.793407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:6b26 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.793455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195cff00 len:0x10000 key:0x182a00 00:24:15.390 [2024-12-14 17:27:11.793489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:6b26 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.795421] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256c80 was disconnected and freed. reset controller. 
00:24:15.390 [2024-12-14 17:27:11.795478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x182c00 00:24:15.390 [2024-12-14 17:27:11.795523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:0ce8 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.795543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x182d00 00:24:15.390 [2024-12-14 17:27:11.795553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:0ce8 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.795567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182c00 00:24:15.390 [2024-12-14 17:27:11.795576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:0ce8 p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798058] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:24:15.390 [2024-12-14 17:27:11.798115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:24:15.390 [2024-12-14 17:27:11.798148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:24:15.390 [2024-12-14 17:27:11.798224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:24:15.390 [2024-12-14 17:27:11.798293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:24:15.390 [2024-12-14 17:27:11.798370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a9fb80 len:0x10000 key:0x182d00 00:24:15.390 [2024-12-14 17:27:11.798441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:24:15.390 
[2024-12-14 17:27:11.798535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182d00 00:24:15.390 [2024-12-14 17:27:11.798564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x182f00 00:24:15.390 [2024-12-14 17:27:11.798592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:24:15.390 [2024-12-14 17:27:11.798620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:24:15.390 [2024-12-14 17:27:11.798648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182f00 00:24:15.390 [2024-12-14 17:27:11.798676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:24:15.390 [2024-12-14 17:27:11.798704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:24:15.390 [2024-12-14 17:27:11.798732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:24:15.390 [2024-12-14 17:27:11.798760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182f00 00:24:15.390 
[2024-12-14 17:27:11.798792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:24:15.390 [2024-12-14 17:27:11.798821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182f00 00:24:15.390 [2024-12-14 17:27:11.798850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182d00 00:24:15.390 [2024-12-14 17:27:11.798878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.390 [2024-12-14 17:27:11.798894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:24:15.391 [2024-12-14 17:27:11.798907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.798923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182f00 00:24:15.391 [2024-12-14 17:27:11.798935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.798951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182e00 00:24:15.391 [2024-12-14 17:27:11.798964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.798980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182e00 00:24:15.391 [2024-12-14 17:27:11.798993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x184300 00:24:15.391 
[2024-12-14 17:27:11.799050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001337d000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001335c000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013422000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013401000 len:0x10000 key:0x184300 00:24:15.391 
[2024-12-14 17:27:11.799306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133e0000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec97000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec76000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec55000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d55000 len:0x10000 key:0x184300 00:24:15.391 
[2024-12-14 17:27:11.799565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d34000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5a9000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.799784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c567000 len:0x10000 key:0x184300 00:24:15.391 [2024-12-14 17:27:11.799796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:60bc p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.801734] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256800 was disconnected and freed. reset controller. 
00:24:15.391 [2024-12-14 17:27:11.801758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:24:15.391 [2024-12-14 17:27:11.801771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.801789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183100 00:24:15.391 [2024-12-14 17:27:11.801802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.801817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183100 00:24:15.391 [2024-12-14 17:27:11.801833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.801848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183100 00:24:15.391 [2024-12-14 17:27:11.801860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.391 [2024-12-14 17:27:11.801875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183f00 00:24:15.392 [2024-12-14 17:27:11.801888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.801902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.801915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.801929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.801942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.801956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183f00 00:24:15.392 [2024-12-14 17:27:11.801969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.801984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.801999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 
00:24:15.392 [2024-12-14 17:27:11.802014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.802026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x183f00 00:24:15.392 [2024-12-14 17:27:11.802053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183f00 00:24:15.392 [2024-12-14 17:27:11.802080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182f00 00:24:15.392 [2024-12-14 17:27:11.802107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.802241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 
00:24:15.392 [2024-12-14 17:27:11.802256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.802297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.802326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183f00 00:24:15.392 [2024-12-14 17:27:11.802379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:24:15.392 [2024-12-14 17:27:11.802406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:24:15.392 [2024-12-14 17:27:11.802463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183f00 00:24:15.392 [2024-12-14 17:27:11.802490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 
00:24:15.392 [2024-12-14 17:27:11.802510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:24:15.392 [2024-12-14 17:27:11.802523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.802632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:24:15.392 [2024-12-14 17:27:11.802659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:24:15.392 [2024-12-14 17:27:11.802714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x183100 00:24:15.392 [2024-12-14 17:27:11.802741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 
00:24:15.392 [2024-12-14 17:27:11.802756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184300 00:24:15.392 [2024-12-14 17:27:11.802768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184300 00:24:15.392 [2024-12-14 17:27:11.802795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135cf000 len:0x10000 key:0x184300 00:24:15.392 [2024-12-14 17:27:11.802822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135ae000 len:0x10000 key:0x184300 00:24:15.392 [2024-12-14 17:27:11.802849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.392 [2024-12-14 17:27:11.802863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001358d000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.802875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.802890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001356c000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.802902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.802918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.802930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.802945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.802957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.802972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013695000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.802984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 
00:24:15.393 [2024-12-14 17:27:11.802999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013674000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135f0000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 
00:24:15.393 [2024-12-14 17:27:11.803242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d72000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be71000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be50000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ae000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.803458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d28d000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 
00:24:15.393 [2024-12-14 17:27:11.803485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d26c000 len:0x10000 key:0x184300 00:24:15.393 [2024-12-14 17:27:11.803533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:252e p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805635] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192565c0 was disconnected and freed. reset controller. 00:24:15.393 [2024-12-14 17:27:11.805664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x184000 00:24:15.393 [2024-12-14 17:27:11.805678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183500 00:24:15.393 [2024-12-14 17:27:11.805707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183500 00:24:15.393 [2024-12-14 17:27:11.805735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x184000 00:24:15.393 [2024-12-14 17:27:11.805763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183500 00:24:15.393 [2024-12-14 17:27:11.805790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x184000 00:24:15.393 [2024-12-14 17:27:11.805818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x183f00 00:24:15.393 [2024-12-14 17:27:11.805845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183500 00:24:15.393 
[2024-12-14 17:27:11.805872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.393 [2024-12-14 17:27:11.805886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183500 00:24:15.393 [2024-12-14 17:27:11.805899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.805913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.805926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.805940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.805952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.805969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.805982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.805996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183500 00:24:15.394 
[2024-12-14 17:27:11.806118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183f00 00:24:15.394 [2024-12-14 17:27:11.806145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183f00 00:24:15.394 [2024-12-14 17:27:11.806226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.806252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.806281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.806336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x184000 00:24:15.394 
[2024-12-14 17:27:11.806363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.806390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.806471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.806503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183500 00:24:15.394 [2024-12-14 17:27:11.806591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x184000 00:24:15.394 
[2024-12-14 17:27:11.806617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x184000 00:24:15.394 [2024-12-14 17:27:11.806644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:24:15.394 [2024-12-14 17:27:11.806671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:24:15.394 [2024-12-14 17:27:11.806699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001375b000 len:0x10000 key:0x184300 00:24:15.394 [2024-12-14 17:27:11.806726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.394 [2024-12-14 17:27:11.806740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001373a000 len:0x10000 key:0x184300 00:24:15.394 [2024-12-14 17:27:11.806753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a5000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b484000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x184300 00:24:15.395 
[2024-12-14 17:27:11.806860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.806984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.806996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7fb000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184300 00:24:15.395 
[2024-12-14 17:27:11.807104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011952000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0e4000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0c3000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0a2000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x184300 00:24:15.395 
[2024-12-14 17:27:11.807349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.807390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 len:0x10000 key:0x184300 00:24:15.395 [2024-12-14 17:27:11.807402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:c250 p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809579] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256380 was disconnected and freed. reset controller. 00:24:15.395 [2024-12-14 17:27:11.809604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183500 00:24:15.395 [2024-12-14 17:27:11.809617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183c00 00:24:15.395 [2024-12-14 17:27:11.809647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183c00 00:24:15.395 [2024-12-14 17:27:11.809677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183c00 00:24:15.395 [2024-12-14 17:27:11.809704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183800 00:24:15.395 [2024-12-14 17:27:11.809732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183600 00:24:15.395 [2024-12-14 17:27:11.809759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183600 00:24:15.395 [2024-12-14 17:27:11.809786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183800 00:24:15.395 [2024-12-14 17:27:11.809812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183c00 00:24:15.395 [2024-12-14 17:27:11.809840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183600 00:24:15.395 [2024-12-14 17:27:11.809867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183800 00:24:15.395 [2024-12-14 17:27:11.809894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.395 [2024-12-14 17:27:11.809908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183800 00:24:15.395 [2024-12-14 17:27:11.809921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.809935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.809948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.809962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.809977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.809991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183800 00:24:15.396 [2024-12-14 17:27:11.810059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183600 00:24:15.396 [2024-12-14 17:27:11.810086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183600 00:24:15.396 [2024-12-14 17:27:11.810141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183600 00:24:15.396 [2024-12-14 17:27:11.810168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183800 00:24:15.396 [2024-12-14 17:27:11.810195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183800 00:24:15.396 [2024-12-14 17:27:11.810223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183500 00:24:15.396 [2024-12-14 17:27:11.810265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183800 00:24:15.396 [2024-12-14 17:27:11.810348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183800 00:24:15.396 [2024-12-14 17:27:11.810430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183800 00:24:15.396 [2024-12-14 17:27:11.810456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183600 00:24:15.396 [2024-12-14 17:27:11.810483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183500 00:24:15.396 [2024-12-14 17:27:11.810517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183600 00:24:15.396 [2024-12-14 17:27:11.810572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183c00 00:24:15.396 [2024-12-14 17:27:11.810600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.810628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.810655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.810669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b56b000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b54a000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011973000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bb000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09a000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x184300 00:24:15.396 [2024-12-14 17:27:11.817710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.396 [2024-12-14 17:27:11.817725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fa3000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f82000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f61000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f40000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001318e000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.817978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.817995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c126000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.818008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.818022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.818034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.818049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.818061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.818075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.818088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.818102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.818114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.818128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x184300 00:24:15.397 [2024-12-14 17:27:11.818140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:b42c p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820524] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256140 was disconnected and freed. reset controller. 00:24:15.397 [2024-12-14 17:27:11.820568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.820585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.820621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.820649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.820677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.820708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.820735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.820763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.820790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.820817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.820843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183900 00:24:15.397 [2024-12-14 17:27:11.820870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.820898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.820924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.820951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.820978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.820992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.821007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.821022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.821034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.821049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.821061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.821076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183300 00:24:15.397 [2024-12-14 17:27:11.821088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.397 [2024-12-14 17:27:11.821103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183a00 00:24:15.397 [2024-12-14 17:27:11.821115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183300 00:24:15.398 [2024-12-14 17:27:11.821169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183300 00:24:15.398 [2024-12-14 17:27:11.821196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183300 00:24:15.398 [2024-12-14 17:27:11.821250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183300 00:24:15.398 [2024-12-14 17:27:11.821306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183300 00:24:15.398 [2024-12-14 17:27:11.821388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183300 00:24:15.398 [2024-12-14 17:27:11.821415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183300 00:24:15.398 [2024-12-14 17:27:11.821495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183a00 00:24:15.398 [2024-12-14 17:27:11.821555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e478000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.821978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b820000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.821990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.822005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.822017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.822032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.822044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.822059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.822071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.398 [2024-12-14 17:27:11.822085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x184300 00:24:15.398 [2024-12-14 17:27:11.822098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68c000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.822304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x184300 00:24:15.399 [2024-12-14 17:27:11.822316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:dfac p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824281] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 
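Every data-path command queued on this I/O qpair is completed with status (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion. That is the expected way for outstanding READs and WRITEs to drain when the submission queue is torn down underneath them by the controller reset that follows. A small, self-contained C sketch of how such a status pair decodes; the strings mirror what the SPDK log prints above, and the constant names mentioned in the comment are from spdk/nvme_spec.h as I understand it:

```c
#include <stdio.h>
#include <stdint.h>

/* Decode an NVMe completion status pair as printed above, e.g. "(00/08)".
 * Values follow the NVMe base specification: SCT 0x0 is the generic command
 * status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion"
 * (SPDK exposes these as SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION). */
static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0) {               /* generic command status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "other generic status";
        }
    }
    return "non-generic status code type";
}

int main(void)
{
    /* The completions in the log above all carry (sct/sc) = (00/08). */
    printf("(00/08) -> %s\n", decode_status(0x00, 0x08));
    return 0;
}
```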
00:24:15.399 [2024-12-14 17:27:11.824308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184400 00:24:15.399 [2024-12-14 17:27:11.824379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184400 00:24:15.399 [2024-12-14 17:27:11.824433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183900 00:24:15.399 [2024-12-14 17:27:11.824487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183900 00:24:15.399 [2024-12-14 17:27:11.824526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184400 00:24:15.399 [2024-12-14 17:27:11.824553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 
00:24:15.399 [2024-12-14 17:27:11.824568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184400 00:24:15.399 [2024-12-14 17:27:11.824580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183900 00:24:15.399 [2024-12-14 17:27:11.824662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183900 00:24:15.399 [2024-12-14 17:27:11.824689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183900 00:24:15.399 [2024-12-14 17:27:11.824769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 
00:24:15.399 [2024-12-14 17:27:11.824814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184400 00:24:15.399 [2024-12-14 17:27:11.824853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184400 00:24:15.399 [2024-12-14 17:27:11.824880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.399 [2024-12-14 17:27:11.824948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183200 00:24:15.399 [2024-12-14 17:27:11.824961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.824975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184400 00:24:15.400 [2024-12-14 17:27:11.824987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184400 00:24:15.400 [2024-12-14 17:27:11.825014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 
00:24:15.400 [2024-12-14 17:27:11.825055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184400 00:24:15.400 [2024-12-14 17:27:11.825068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184400 00:24:15.400 [2024-12-14 17:27:11.825095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183900 00:24:15.400 [2024-12-14 17:27:11.825150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184400 00:24:15.400 [2024-12-14 17:27:11.825205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183900 00:24:15.400 [2024-12-14 17:27:11.825258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 
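Each aborted command also logs the keyed SGL data block it carried: a remote buffer address, a length (0x10000 bytes = 64 KiB, matching the 128-block, 65536-byte I/Os of this job), and the memory key the target would have used for RDMA access. A minimal sketch of that 16-byte descriptor layout as defined by the NVMe over Fabrics specification; this illustrates the wire format only, it is not SPDK code, and the struct and field names are invented for the example:

```c
#include <stdint.h>
#include <stdio.h>

/* Keyed SGL Data Block descriptor (NVMe over Fabrics), 16 bytes on the wire:
 * bytes 0-7  remote address, bytes 8-10 length (24-bit),
 * bytes 11-14 memory key, byte 15 descriptor type / sub type.
 * The struct only documents the layout; it is not populated here. */
struct keyed_sgl {
    uint64_t address;
    uint8_t  length[3];
    uint8_t  key[4];
    uint8_t  type;
} __attribute__((packed));

int main(void)
{
    /* Values taken from one of the aborted WRITEs above:
     * "ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183300" */
    uint64_t address = 0x20001b3bfe80ULL;
    uint32_t length  = 0x10000;   /* 64 KiB, the job's IO size */
    uint32_t key     = 0x183300;  /* remote key used for RDMA access */

    printf("addr 0x%llx len 0x%x (%u KiB) key 0x%x, descriptor size %zu bytes\n",
           (unsigned long long)address, length, length / 1024, key,
           sizeof(struct keyed_sgl));
    return 0;
}
```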
00:24:15.400 [2024-12-14 17:27:11.825300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184400 00:24:15.400 [2024-12-14 17:27:11.825422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183900 00:24:15.400 [2024-12-14 17:27:11.825449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 
00:24:15.400 [2024-12-14 17:27:11.825571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183900 00:24:15.400 [2024-12-14 17:27:11.825583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183200 00:24:15.400 [2024-12-14 17:27:11.825638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184400 00:24:15.400 [2024-12-14 17:27:11.825665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb0000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd1000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ca8000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cc9000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 
00:24:15.400 [2024-12-14 17:27:11.825816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0dc000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x184300 00:24:15.400 [2024-12-14 17:27:11.825963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.400 [2024-12-14 17:27:11.825979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x184300 00:24:15.401 [2024-12-14 17:27:11.825991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.826006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x184300 00:24:15.401 [2024-12-14 17:27:11.826018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.826033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x184300 00:24:15.401 [2024-12-14 17:27:11.826047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.401 
[2024-12-14 17:27:11.826063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba72000 len:0x10000 key:0x184300 00:24:15.401 [2024-12-14 17:27:11.826075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64805 cdw0:dbba6000 sqhd:464a p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.828291] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:24:15.401 [2024-12-14 17:27:11.828361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.828376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:74e0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.828390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.828402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:74e0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.828415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.828427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:74e0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.828440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.828452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:74e0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.830492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.401 [2024-12-14 17:27:11.830536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:15.401 [2024-12-14 17:27:11.830549] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
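The "CQ transport error -6 (No such device or address)" lines are spdk_nvme_qpair_process_completions() reporting -ENXIO for the admin qpair: the RDMA connection under it is gone, so the controller is marked failed and a reset/failover is scheduled, which is why the I/O completions above were aborted. A minimal sketch of that host-side pattern, assuming an application that polls its own qpairs through the public SPDK API; poll_and_recover is a hypothetical helper and this is not the bdev_nvme module's actual recovery path:

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one qpair and recover from a transport-level failure, roughly the
 * pattern behind the "CQ transport error -6" / "resetting controller"
 * messages above. Illustrative only; error handling is reduced to the
 * essentials. */
static void poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no completion limit */);

    if (rc == -ENXIO) {
        /* The transport reports the queue pair (or the whole connection) is
         * gone, as with the RDMA errors in this log. Outstanding commands are
         * failed with ABORTED - SQ DELETION and the controller must be reset
         * before new I/O can be submitted. */
        fprintf(stderr, "qpair failed (%d), resetting controller\n", rc);
        if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
            fprintf(stderr, "controller reset failed; giving up\n");
        }
    }
}
```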
00:24:15.401 [2024-12-14 17:27:11.830570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.830583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:8d10 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.830603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.830615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:8d10 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.830628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.830640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:8d10 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.830653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.830665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:8d10 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.832544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.401 [2024-12-14 17:27:11.832586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:15.401 [2024-12-14 17:27:11.832617] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.401 [2024-12-14 17:27:11.832659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.832672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:1386 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.832685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.832697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:1386 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.832710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.832722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:1386 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.832735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.832747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:1386 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.835032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.401 [2024-12-14 17:27:11.835072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:15.401 [2024-12-14 17:27:11.835102] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.401 [2024-12-14 17:27:11.835146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.835177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:9542 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.835208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.835238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:9542 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.835270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.835303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:9542 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.835316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.835328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:9542 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.837873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.401 [2024-12-14 17:27:11.837913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.401 [2024-12-14 17:27:11.837942] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.401 [2024-12-14 17:27:11.837988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.838019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:bda0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.838050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.838080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:bda0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.838119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.838149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:bda0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.838181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.838210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:bda0 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.840539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.401 [2024-12-14 17:27:11.840556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:15.401 [2024-12-14 17:27:11.840568] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.401 [2024-12-14 17:27:11.840585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.840598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:b17c p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.840611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.840623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:b17c p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.840636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.840648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:b17c p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.840661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.840673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:b17c p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.842552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.401 [2024-12-14 17:27:11.842592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:15.401 [2024-12-14 17:27:11.842603] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
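The admin commands being aborted in these blocks are ASYNC EVENT REQUESTs (opcode 0x0c): the host keeps a few of them outstanding on qid 0 so the controller can signal events, and when the admin SQ is deleted during a reset they complete with the same (00/08) status as the I/O above, which is expected noise rather than a data-path failure. A short sketch of how an SPDK application hooks asynchronous events, assuming the public registration API; handle_aer and arm_aer are hypothetical names used only for the example:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical AER handler: real events arrive here, while completions that
 * carry an abort status during a controller reset (the (00/08) aborts above)
 * can simply be ignored. */
static void handle_aer(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        return;  /* e.g. ABORTED - SQ DELETION while the controller resets */
    }
    printf("async event: cdw0 0x%x\n", cpl->cdw0);
}

static void arm_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    /* Register the callback once; the driver keeps ASYNC EVENT REQUEST
     * commands outstanding on the admin queue (qid 0) on the application's
     * behalf and re-arms them after the controller comes back. */
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, handle_aer, NULL);
}
```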
00:24:15.401 [2024-12-14 17:27:11.842621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.842633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:25a2 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.842646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.842659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:25a2 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.842671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.842683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:25a2 p:1 m:0 dnr:0 00:24:15.401 [2024-12-14 17:27:11.842696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.401 [2024-12-14 17:27:11.842708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:25a2 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.844419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.402 [2024-12-14 17:27:11.844460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:15.402 [2024-12-14 17:27:11.844488] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.402 [2024-12-14 17:27:11.844653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.844686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:66b0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.844718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.844748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:66b0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.844780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.844810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:66b0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.844841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.844872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:66b0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.846933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.402 [2024-12-14 17:27:11.846973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:15.402 [2024-12-14 17:27:11.847002] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.402 [2024-12-14 17:27:11.847046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.847078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:72d2 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.847110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.847139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:72d2 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.847170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.847200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:72d2 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.847231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.847260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:72d2 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.849191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.402 [2024-12-14 17:27:11.849231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:15.402 [2024-12-14 17:27:11.849260] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.402 [2024-12-14 17:27:11.849305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.849321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:87f0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.849334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.849346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:87f0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.849359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.849371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:87f0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.849383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.402 [2024-12-14 17:27:11.849395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64805 cdw0:0 sqhd:87f0 p:1 m:0 dnr:0 00:24:15.402 [2024-12-14 17:27:11.867345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.402 [2024-12-14 17:27:11.867397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:15.402 [2024-12-14 17:27:11.867428] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.402 [2024-12-14 17:27:11.875723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.402 [2024-12-14 17:27:11.875748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:15.402 [2024-12-14 17:27:11.875759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:15.402 [2024-12-14 17:27:11.875800] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.402 [2024-12-14 17:27:11.875816] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.402 [2024-12-14 17:27:11.875828] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.402 [2024-12-14 17:27:11.875840] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.402 [2024-12-14 17:27:11.875852] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.402 [2024-12-14 17:27:11.875863] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:15.402 [2024-12-14 17:27:11.875875] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.402 [2024-12-14 17:27:11.875955] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:15.402 [2024-12-14 17:27:11.875966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:15.402 [2024-12-14 17:27:11.875976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:15.402 [2024-12-14 17:27:11.875989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:15.402 [2024-12-14 17:27:11.878071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:15.402 task offset: 87168 on job bdev=Nvme1n1 fails 00:24:15.402 00:24:15.402 Latency(us) 00:24:15.402 [2024-12-14T16:27:12.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme1n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme1n1 : 2.02 323.36 20.21 31.74 0.00 179652.85 42991.62 1073741.82 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme2n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme2n1 : 2.02 332.14 20.76 31.73 0.00 174750.48 40684.75 1073741.82 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme3n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme3n1 : 2.02 333.49 20.84 31.71 0.00 173629.67 15833.50 1134139.80 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme4n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme4n1 : 2.02 335.33 20.96 31.70 0.00 172297.45 19293.80 1134139.80 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme5n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme5n1 : 2.02 317.87 19.87 31.69 0.00 180306.21 46137.34 1134139.80 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme6n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme6n1 : 2.02 310.82 19.43 31.68 0.00 183361.70 46976.20 1127428.92 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme7n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme7n1 : 2.02 310.70 19.42 31.66 0.00 182811.19 47395.64 1120718.03 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme8n1 ended in about 2.02 seconds 
with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.402 Nvme8n1 : 2.02 310.58 19.41 31.65 0.00 182270.54 46976.20 1114007.14 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.402 [2024-12-14T16:27:12.086Z] Job: Nvme9n1 ended in about 2.02 seconds with error 00:24:15.402 Verification LBA range: start 0x0 length 0x400 00:24:15.403 Nvme9n1 : 2.02 310.45 19.40 31.64 0.00 181918.09 45508.20 1114007.14 00:24:15.403 [2024-12-14T16:27:12.087Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.403 [2024-12-14T16:27:12.087Z] Job: Nvme10n1 ended in about 2.02 seconds with error 00:24:15.403 Verification LBA range: start 0x0 length 0x400 00:24:15.403 Nvme10n1 : 2.02 207.05 12.94 31.63 0.00 259774.17 45088.77 1107296.26 00:24:15.403 [2024-12-14T16:27:12.087Z] =================================================================================================================== 00:24:15.403 [2024-12-14T16:27:12.087Z] Total : 3091.78 193.24 316.82 0.00 184562.53 15833.50 1134139.80 00:24:15.403 [2024-12-14 17:27:11.897534] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:15.403 [2024-12-14 17:27:11.897557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:15.403 [2024-12-14 17:27:11.897570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:15.403 [2024-12-14 17:27:11.906962] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.907021] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.907048] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:24:15.403 [2024-12-14 17:27:11.907172] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.907207] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.907231] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:24:15.403 [2024-12-14 17:27:11.907366] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.907400] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.907424] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580 00:24:15.403 [2024-12-14 17:27:11.910937] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.910985] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.911010] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0 00:24:15.403 [2024-12-14 17:27:11.911134] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.911168] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.911192] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100 00:24:15.403 [2024-12-14 17:27:11.911296] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.911329] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.911354] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:24:15.403 [2024-12-14 17:27:11.911473] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.911518] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.911542] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:24:15.403 [2024-12-14 17:27:11.912288] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.912305] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.912315] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:24:15.403 [2024-12-14 17:27:11.912403] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.912417] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.912428] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:24:15.403 [2024-12-14 17:27:11.912527] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:15.403 [2024-12-14 17:27:11.912541] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:15.403 [2024-12-14 17:27:11.912551] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:24:15.662 17:27:12 -- target/shutdown.sh@141 -- # kill -9 1437704 00:24:15.662 17:27:12 -- target/shutdown.sh@143 -- # stoptarget 00:24:15.662 17:27:12 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:15.662 17:27:12 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:15.662 17:27:12 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:15.662 17:27:12 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:15.662 17:27:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:15.662 17:27:12 -- nvmf/common.sh@116 -- # sync 00:24:15.662 17:27:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:15.662 17:27:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:15.662 17:27:12 -- nvmf/common.sh@119 -- # 
set +e 00:24:15.662 17:27:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:15.662 17:27:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:15.662 rmmod nvme_rdma 00:24:15.662 rmmod nvme_fabrics 00:24:15.663 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 1437704 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:24:15.663 17:27:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:15.663 17:27:12 -- nvmf/common.sh@123 -- # set -e 00:24:15.663 17:27:12 -- nvmf/common.sh@124 -- # return 0 00:24:15.663 17:27:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:15.663 17:27:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:15.663 17:27:12 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:15.663 00:24:15.663 real 0m5.326s 00:24:15.663 user 0m18.386s 00:24:15.663 sys 0m1.310s 00:24:15.663 17:27:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:15.663 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:15.663 ************************************ 00:24:15.663 END TEST nvmf_shutdown_tc3 00:24:15.663 ************************************ 00:24:15.663 17:27:12 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:15.663 00:24:15.663 real 0m25.118s 00:24:15.663 user 1m14.943s 00:24:15.663 sys 0m8.931s 00:24:15.663 17:27:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:15.663 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:15.663 ************************************ 00:24:15.663 END TEST nvmf_shutdown 00:24:15.663 ************************************ 00:24:15.922 17:27:12 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:15.922 17:27:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:15.922 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:15.922 17:27:12 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:15.922 17:27:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.922 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:15.922 17:27:12 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:15.922 17:27:12 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:15.922 17:27:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:15.922 17:27:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.922 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:15.922 ************************************ 00:24:15.922 START TEST nvmf_multicontroller 00:24:15.922 ************************************ 00:24:15.922 17:27:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:15.922 * Looking for test storage... 
00:24:15.922 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:15.922 17:27:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:15.922 17:27:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:15.922 17:27:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:15.922 17:27:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:15.922 17:27:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:15.922 17:27:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:15.922 17:27:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:15.922 17:27:12 -- scripts/common.sh@335 -- # IFS=.-: 00:24:15.922 17:27:12 -- scripts/common.sh@335 -- # read -ra ver1 00:24:15.922 17:27:12 -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.922 17:27:12 -- scripts/common.sh@336 -- # read -ra ver2 00:24:15.922 17:27:12 -- scripts/common.sh@337 -- # local 'op=<' 00:24:15.922 17:27:12 -- scripts/common.sh@339 -- # ver1_l=2 00:24:15.922 17:27:12 -- scripts/common.sh@340 -- # ver2_l=1 00:24:15.922 17:27:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:15.922 17:27:12 -- scripts/common.sh@343 -- # case "$op" in 00:24:15.922 17:27:12 -- scripts/common.sh@344 -- # : 1 00:24:15.922 17:27:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:15.922 17:27:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.922 17:27:12 -- scripts/common.sh@364 -- # decimal 1 00:24:15.922 17:27:12 -- scripts/common.sh@352 -- # local d=1 00:24:15.922 17:27:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.922 17:27:12 -- scripts/common.sh@354 -- # echo 1 00:24:15.922 17:27:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:15.922 17:27:12 -- scripts/common.sh@365 -- # decimal 2 00:24:15.922 17:27:12 -- scripts/common.sh@352 -- # local d=2 00:24:15.922 17:27:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.922 17:27:12 -- scripts/common.sh@354 -- # echo 2 00:24:16.182 17:27:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:16.182 17:27:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:16.182 17:27:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:16.182 17:27:12 -- scripts/common.sh@367 -- # return 0 00:24:16.182 17:27:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.182 17:27:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:16.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.182 --rc genhtml_branch_coverage=1 00:24:16.182 --rc genhtml_function_coverage=1 00:24:16.182 --rc genhtml_legend=1 00:24:16.182 --rc geninfo_all_blocks=1 00:24:16.182 --rc geninfo_unexecuted_blocks=1 00:24:16.182 00:24:16.182 ' 00:24:16.182 17:27:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:16.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.182 --rc genhtml_branch_coverage=1 00:24:16.182 --rc genhtml_function_coverage=1 00:24:16.182 --rc genhtml_legend=1 00:24:16.182 --rc geninfo_all_blocks=1 00:24:16.182 --rc geninfo_unexecuted_blocks=1 00:24:16.182 00:24:16.182 ' 00:24:16.182 17:27:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:16.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.182 --rc genhtml_branch_coverage=1 00:24:16.182 --rc genhtml_function_coverage=1 00:24:16.182 --rc genhtml_legend=1 00:24:16.182 --rc geninfo_all_blocks=1 00:24:16.182 --rc geninfo_unexecuted_blocks=1 00:24:16.182 00:24:16.182 ' 
00:24:16.182 17:27:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:16.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.182 --rc genhtml_branch_coverage=1 00:24:16.182 --rc genhtml_function_coverage=1 00:24:16.182 --rc genhtml_legend=1 00:24:16.182 --rc geninfo_all_blocks=1 00:24:16.182 --rc geninfo_unexecuted_blocks=1 00:24:16.182 00:24:16.182 ' 00:24:16.182 17:27:12 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.182 17:27:12 -- nvmf/common.sh@7 -- # uname -s 00:24:16.182 17:27:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.182 17:27:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.182 17:27:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.182 17:27:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.182 17:27:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.182 17:27:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.182 17:27:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.182 17:27:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.182 17:27:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.182 17:27:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.182 17:27:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:16.182 17:27:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:16.182 17:27:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.182 17:27:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.182 17:27:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.182 17:27:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:16.182 17:27:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.182 17:27:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.182 17:27:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.182 17:27:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.182 17:27:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.182 17:27:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.182 17:27:12 -- paths/export.sh@5 -- # export PATH 00:24:16.182 17:27:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.182 17:27:12 -- nvmf/common.sh@46 -- # : 0 00:24:16.182 17:27:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:16.182 17:27:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:16.182 17:27:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:16.182 17:27:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.182 17:27:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.182 17:27:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:16.182 17:27:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:16.182 17:27:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:16.182 17:27:12 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.182 17:27:12 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.182 17:27:12 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:16.182 17:27:12 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:16.182 17:27:12 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.182 17:27:12 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:24:16.182 17:27:12 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:16.182 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
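That is the whole of the multicontroller run on this rig: right after sourcing nvmf/common.sh, multicontroller.sh checks the transport and, because the rdma stack here cannot put host and target on the same IP, prints the notice above and exits 0, so run_test still records the test as passed rather than failed. A minimal sketch of that guard pattern, paraphrased from the trace rather than copied from the script (the TEST_TRANSPORT variable name is an assumption), looks like this:

#!/usr/bin/env bash
# Sketch of the early-exit guard used by host/multicontroller.sh: skip the whole
# file when the selected transport cannot support the test, but exit 0 so the
# surrounding run_test bookkeeping still counts it as a pass.
TEST_TRANSPORT=${TEST_TRANSPORT:-rdma}   # assumption: transport arrives via env / --transport

if [ "$TEST_TRANSPORT" = "rdma" ]; then
    echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
    exit 0
fi

# ... the actual multicontroller test body would follow here ...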
00:24:16.182 17:27:12 -- host/multicontroller.sh@20 -- # exit 0 00:24:16.182 00:24:16.182 real 0m0.201s 00:24:16.182 user 0m0.110s 00:24:16.182 sys 0m0.108s 00:24:16.182 17:27:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.182 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:16.182 ************************************ 00:24:16.182 END TEST nvmf_multicontroller 00:24:16.182 ************************************ 00:24:16.182 17:27:12 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:16.182 17:27:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:16.182 17:27:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.182 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:16.182 ************************************ 00:24:16.182 START TEST nvmf_aer 00:24:16.182 ************************************ 00:24:16.183 17:27:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:16.183 * Looking for test storage... 00:24:16.183 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:16.183 17:27:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:16.183 17:27:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:16.183 17:27:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:16.183 17:27:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:16.183 17:27:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:16.183 17:27:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:16.183 17:27:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:16.183 17:27:12 -- scripts/common.sh@335 -- # IFS=.-: 00:24:16.183 17:27:12 -- scripts/common.sh@335 -- # read -ra ver1 00:24:16.183 17:27:12 -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.183 17:27:12 -- scripts/common.sh@336 -- # read -ra ver2 00:24:16.183 17:27:12 -- scripts/common.sh@337 -- # local 'op=<' 00:24:16.183 17:27:12 -- scripts/common.sh@339 -- # ver1_l=2 00:24:16.183 17:27:12 -- scripts/common.sh@340 -- # ver2_l=1 00:24:16.183 17:27:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:16.183 17:27:12 -- scripts/common.sh@343 -- # case "$op" in 00:24:16.183 17:27:12 -- scripts/common.sh@344 -- # : 1 00:24:16.183 17:27:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:16.183 17:27:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.183 17:27:12 -- scripts/common.sh@364 -- # decimal 1 00:24:16.183 17:27:12 -- scripts/common.sh@352 -- # local d=1 00:24:16.183 17:27:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.183 17:27:12 -- scripts/common.sh@354 -- # echo 1 00:24:16.183 17:27:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:16.183 17:27:12 -- scripts/common.sh@365 -- # decimal 2 00:24:16.183 17:27:12 -- scripts/common.sh@352 -- # local d=2 00:24:16.183 17:27:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.183 17:27:12 -- scripts/common.sh@354 -- # echo 2 00:24:16.183 17:27:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:16.183 17:27:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:16.183 17:27:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:16.183 17:27:12 -- scripts/common.sh@367 -- # return 0 00:24:16.183 17:27:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.183 17:27:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:16.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.183 --rc genhtml_branch_coverage=1 00:24:16.183 --rc genhtml_function_coverage=1 00:24:16.183 --rc genhtml_legend=1 00:24:16.183 --rc geninfo_all_blocks=1 00:24:16.183 --rc geninfo_unexecuted_blocks=1 00:24:16.183 00:24:16.183 ' 00:24:16.183 17:27:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:16.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.183 --rc genhtml_branch_coverage=1 00:24:16.183 --rc genhtml_function_coverage=1 00:24:16.183 --rc genhtml_legend=1 00:24:16.183 --rc geninfo_all_blocks=1 00:24:16.183 --rc geninfo_unexecuted_blocks=1 00:24:16.183 00:24:16.183 ' 00:24:16.183 17:27:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:16.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.183 --rc genhtml_branch_coverage=1 00:24:16.183 --rc genhtml_function_coverage=1 00:24:16.183 --rc genhtml_legend=1 00:24:16.183 --rc geninfo_all_blocks=1 00:24:16.183 --rc geninfo_unexecuted_blocks=1 00:24:16.183 00:24:16.183 ' 00:24:16.183 17:27:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:16.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.183 --rc genhtml_branch_coverage=1 00:24:16.183 --rc genhtml_function_coverage=1 00:24:16.183 --rc genhtml_legend=1 00:24:16.183 --rc geninfo_all_blocks=1 00:24:16.183 --rc geninfo_unexecuted_blocks=1 00:24:16.183 00:24:16.183 ' 00:24:16.183 17:27:12 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.443 17:27:12 -- nvmf/common.sh@7 -- # uname -s 00:24:16.443 17:27:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.443 17:27:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.443 17:27:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.443 17:27:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.443 17:27:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.443 17:27:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.443 17:27:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.443 17:27:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.443 17:27:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.443 17:27:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.443 17:27:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:24:16.443 17:27:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:16.443 17:27:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.443 17:27:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.443 17:27:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.443 17:27:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:16.443 17:27:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.443 17:27:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.443 17:27:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.443 17:27:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.443 17:27:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.443 17:27:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.443 17:27:12 -- paths/export.sh@5 -- # export PATH 00:24:16.443 17:27:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.443 17:27:12 -- nvmf/common.sh@46 -- # : 0 00:24:16.443 17:27:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:16.443 17:27:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:16.443 17:27:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:16.443 17:27:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.443 17:27:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.443 17:27:12 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:16.443 17:27:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:16.443 17:27:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:16.443 17:27:12 -- host/aer.sh@11 -- # nvmftestinit 00:24:16.443 17:27:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:16.443 17:27:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.443 17:27:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:16.443 17:27:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:16.443 17:27:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:16.443 17:27:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.443 17:27:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.443 17:27:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.443 17:27:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:16.443 17:27:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:16.443 17:27:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:16.443 17:27:12 -- common/autotest_common.sh@10 -- # set +x 00:24:23.016 17:27:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:23.016 17:27:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:23.016 17:27:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:23.016 17:27:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:23.016 17:27:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:23.016 17:27:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:23.016 17:27:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:23.016 17:27:19 -- nvmf/common.sh@294 -- # net_devs=() 00:24:23.016 17:27:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:23.016 17:27:19 -- nvmf/common.sh@295 -- # e810=() 00:24:23.016 17:27:19 -- nvmf/common.sh@295 -- # local -ga e810 00:24:23.016 17:27:19 -- nvmf/common.sh@296 -- # x722=() 00:24:23.016 17:27:19 -- nvmf/common.sh@296 -- # local -ga x722 00:24:23.016 17:27:19 -- nvmf/common.sh@297 -- # mlx=() 00:24:23.016 17:27:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:23.016 17:27:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.016 17:27:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:23.016 17:27:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:23.016 17:27:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:23.016 17:27:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:23.016 17:27:19 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:23.016 17:27:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:23.016 17:27:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:23.016 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:23.016 17:27:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.016 17:27:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:23.016 17:27:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:23.016 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:23.016 17:27:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:23.016 17:27:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.017 17:27:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:23.017 17:27:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.017 17:27:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:23.017 17:27:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.017 17:27:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:23.017 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.017 17:27:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.017 17:27:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:23.017 17:27:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.017 17:27:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:23.017 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.017 17:27:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:23.017 17:27:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:23.017 17:27:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:23.017 17:27:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:23.017 17:27:19 -- nvmf/common.sh@57 -- # uname 00:24:23.017 17:27:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:23.017 17:27:19 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:23.017 17:27:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:23.017 17:27:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:23.017 17:27:19 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:23.017 17:27:19 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:24:23.017 17:27:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:23.017 17:27:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:23.017 17:27:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:23.017 17:27:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:23.017 17:27:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:23.017 17:27:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:23.017 17:27:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:23.017 17:27:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:23.017 17:27:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.017 17:27:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:23.017 17:27:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@104 -- # continue 2 00:24:23.017 17:27:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@104 -- # continue 2 00:24:23.017 17:27:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:23.017 17:27:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.017 17:27:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:23.017 17:27:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:23.017 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:23.017 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:23.017 altname enp217s0f0np0 00:24:23.017 altname ens818f0np0 00:24:23.017 inet 192.168.100.8/24 scope global mlx_0_0 00:24:23.017 valid_lft forever preferred_lft forever 00:24:23.017 17:27:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:23.017 17:27:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.017 17:27:19 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:23.017 17:27:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:23.017 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:23.017 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:23.017 altname enp217s0f1np1 00:24:23.017 altname ens818f1np1 00:24:23.017 inet 192.168.100.9/24 scope global mlx_0_1 00:24:23.017 valid_lft 
forever preferred_lft forever 00:24:23.017 17:27:19 -- nvmf/common.sh@410 -- # return 0 00:24:23.017 17:27:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:23.017 17:27:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:23.017 17:27:19 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:23.017 17:27:19 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:23.017 17:27:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:23.017 17:27:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:23.017 17:27:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:23.017 17:27:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.017 17:27:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:23.017 17:27:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@104 -- # continue 2 00:24:23.017 17:27:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.017 17:27:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:23.017 17:27:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@104 -- # continue 2 00:24:23.017 17:27:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:23.017 17:27:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.017 17:27:19 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:23.017 17:27:19 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.017 17:27:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.017 17:27:19 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:23.017 192.168.100.9' 00:24:23.017 17:27:19 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:23.017 192.168.100.9' 00:24:23.017 17:27:19 -- nvmf/common.sh@445 -- # head -n 1 00:24:23.017 17:27:19 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:23.017 17:27:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:23.017 192.168.100.9' 00:24:23.017 17:27:19 -- nvmf/common.sh@446 -- # tail -n +2 00:24:23.017 17:27:19 -- nvmf/common.sh@446 -- # head -n 1 00:24:23.017 17:27:19 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:23.017 17:27:19 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:23.017 17:27:19 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:23.017 17:27:19 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:23.017 17:27:19 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:23.017 17:27:19 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:23.017 17:27:19 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:23.017 17:27:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:23.017 17:27:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.017 17:27:19 -- common/autotest_common.sh@10 -- # set +x 00:24:23.017 17:27:19 -- nvmf/common.sh@469 -- # nvmfpid=1441766 00:24:23.017 17:27:19 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:23.017 17:27:19 -- nvmf/common.sh@470 -- # waitforlisten 1441766 00:24:23.017 17:27:19 -- common/autotest_common.sh@829 -- # '[' -z 1441766 ']' 00:24:23.017 17:27:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.017 17:27:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.017 17:27:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.017 17:27:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.017 17:27:19 -- common/autotest_common.sh@10 -- # set +x 00:24:23.277 [2024-12-14 17:27:19.733905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:23.277 [2024-12-14 17:27:19.733957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.277 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.277 [2024-12-14 17:27:19.804420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.277 [2024-12-14 17:27:19.842277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:23.277 [2024-12-14 17:27:19.842403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.277 [2024-12-14 17:27:19.842417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.277 [2024-12-14 17:27:19.842426] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:23.277 [2024-12-14 17:27:19.842478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.277 [2024-12-14 17:27:19.842557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.277 [2024-12-14 17:27:19.842626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.277 [2024-12-14 17:27:19.842628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.214 17:27:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.214 17:27:20 -- common/autotest_common.sh@862 -- # return 0 00:24:24.214 17:27:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:24.214 17:27:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.214 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 17:27:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.214 17:27:20 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:24.214 17:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.214 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 [2024-12-14 17:27:20.636783] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aad0d0/0x1ab15a0) succeed. 00:24:24.214 [2024-12-14 17:27:20.645988] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aae670/0x1af2c40) succeed. 00:24:24.214 17:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.214 17:27:20 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:24.214 17:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.214 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 Malloc0 00:24:24.214 17:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.214 17:27:20 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:24.214 17:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.214 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 17:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.214 17:27:20 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.214 17:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.214 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 17:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.214 17:27:20 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:24.214 17:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.214 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 [2024-12-14 17:27:20.816189] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:24.214 17:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.214 17:27:20 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:24.214 17:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.214 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 [2024-12-14 17:27:20.823817] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:24.214 [ 00:24:24.214 { 00:24:24.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:24.214 "subtype": 
"Discovery", 00:24:24.214 "listen_addresses": [], 00:24:24.214 "allow_any_host": true, 00:24:24.214 "hosts": [] 00:24:24.214 }, 00:24:24.214 { 00:24:24.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.214 "subtype": "NVMe", 00:24:24.214 "listen_addresses": [ 00:24:24.214 { 00:24:24.214 "transport": "RDMA", 00:24:24.214 "trtype": "RDMA", 00:24:24.214 "adrfam": "IPv4", 00:24:24.214 "traddr": "192.168.100.8", 00:24:24.214 "trsvcid": "4420" 00:24:24.214 } 00:24:24.214 ], 00:24:24.214 "allow_any_host": true, 00:24:24.214 "hosts": [], 00:24:24.215 "serial_number": "SPDK00000000000001", 00:24:24.215 "model_number": "SPDK bdev Controller", 00:24:24.215 "max_namespaces": 2, 00:24:24.215 "min_cntlid": 1, 00:24:24.215 "max_cntlid": 65519, 00:24:24.215 "namespaces": [ 00:24:24.215 { 00:24:24.215 "nsid": 1, 00:24:24.215 "bdev_name": "Malloc0", 00:24:24.215 "name": "Malloc0", 00:24:24.215 "nguid": "1D96D047151A46A2A2B8CA3915773A6B", 00:24:24.215 "uuid": "1d96d047-151a-46a2-a2b8-ca3915773a6b" 00:24:24.215 } 00:24:24.215 ] 00:24:24.215 } 00:24:24.215 ] 00:24:24.215 17:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.215 17:27:20 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:24.215 17:27:20 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:24.215 17:27:20 -- host/aer.sh@33 -- # aerpid=1442052 00:24:24.215 17:27:20 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:24.215 17:27:20 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:24.215 17:27:20 -- common/autotest_common.sh@1254 -- # local i=0 00:24:24.215 17:27:20 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:24.215 17:27:20 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:24:24.215 17:27:20 -- common/autotest_common.sh@1257 -- # i=1 00:24:24.215 17:27:20 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:24.478 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.478 17:27:20 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:24.478 17:27:20 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:24:24.478 17:27:20 -- common/autotest_common.sh@1257 -- # i=2 00:24:24.478 17:27:20 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:24.478 17:27:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:24.478 17:27:21 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:24.478 17:27:21 -- common/autotest_common.sh@1265 -- # return 0 00:24:24.478 17:27:21 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:24.478 17:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.478 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:24.478 Malloc1 00:24:24.478 17:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.478 17:27:21 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:24.478 17:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.478 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:24.478 17:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.478 17:27:21 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:24.478 17:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.478 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:24.478 [ 00:24:24.478 { 00:24:24.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:24.478 "subtype": "Discovery", 00:24:24.478 "listen_addresses": [], 00:24:24.478 "allow_any_host": true, 00:24:24.478 "hosts": [] 00:24:24.478 }, 00:24:24.478 { 00:24:24.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.478 "subtype": "NVMe", 00:24:24.478 "listen_addresses": [ 00:24:24.478 { 00:24:24.478 "transport": "RDMA", 00:24:24.478 "trtype": "RDMA", 00:24:24.478 "adrfam": "IPv4", 00:24:24.478 "traddr": "192.168.100.8", 00:24:24.478 "trsvcid": "4420" 00:24:24.478 } 00:24:24.478 ], 00:24:24.478 "allow_any_host": true, 00:24:24.478 "hosts": [], 00:24:24.478 "serial_number": "SPDK00000000000001", 00:24:24.478 "model_number": "SPDK bdev Controller", 00:24:24.478 "max_namespaces": 2, 00:24:24.478 "min_cntlid": 1, 00:24:24.478 "max_cntlid": 65519, 00:24:24.478 "namespaces": [ 00:24:24.478 { 00:24:24.478 "nsid": 1, 00:24:24.478 "bdev_name": "Malloc0", 00:24:24.478 "name": "Malloc0", 00:24:24.478 "nguid": "1D96D047151A46A2A2B8CA3915773A6B", 00:24:24.478 "uuid": "1d96d047-151a-46a2-a2b8-ca3915773a6b" 00:24:24.478 }, 00:24:24.478 { 00:24:24.478 "nsid": 2, 00:24:24.478 "bdev_name": "Malloc1", 00:24:24.478 "name": "Malloc1", 00:24:24.478 "nguid": "123A43C5087C45079046CEACC9DD885C", 00:24:24.478 "uuid": "123a43c5-087c-4507-9046-ceacc9dd885c" 00:24:24.478 } 00:24:24.478 ] 00:24:24.478 } 00:24:24.478 ] 00:24:24.478 17:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.478 17:27:21 -- host/aer.sh@43 -- # wait 1442052 00:24:24.478 Asynchronous Event Request test 00:24:24.478 Attaching to 192.168.100.8 00:24:24.478 Attached to 192.168.100.8 00:24:24.478 Registering asynchronous event callbacks... 00:24:24.478 Starting namespace attribute notice tests for all controllers... 00:24:24.478 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:24.478 aer_cb - Changed Namespace 00:24:24.478 Cleaning up... 
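The aer.sh pass above is the substantive part of this chunk: the target is provisioned with a one-namespace subsystem, the aer test binary attaches over RDMA at 192.168.100.8:4420 and arms its event callbacks, and it is the addition of a second namespace (Malloc1 as nsid 2) that produces the "aer_cb - Changed Namespace" line before cleanup. Condensed from the rpc_cmd calls traced above into plain scripts/rpc.py invocations, and assuming an nvmf_tgt is already listening on its default RPC socket, the target-side sequence looks roughly like this (an illustrative sketch, not the verbatim test script):

#!/usr/bin/env bash
# Sketch of the namespace-change AER flow exercised by host/aer.sh, using the
# same RPCs and arguments that appear in the trace above.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# With the host-side aer tool connected and waiting on its touch file, adding a
# second namespace is what raises the Namespace Attribute Changed async event.
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The host side (the test/nvme/aer/aer binary launched with -n 2 and the /tmp/aer_touch_file used for synchronization) is driven exactly as shown in the trace and is what prints the Attaching/Attached and aer_cb lines.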
00:24:24.478 17:27:21 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:24.478 17:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.478 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:24.839 17:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.839 17:27:21 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:24.839 17:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.839 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:24.839 17:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.839 17:27:21 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.839 17:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.839 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:24.839 17:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.839 17:27:21 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:24.839 17:27:21 -- host/aer.sh@51 -- # nvmftestfini 00:24:24.839 17:27:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:24.839 17:27:21 -- nvmf/common.sh@116 -- # sync 00:24:24.839 17:27:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:24.839 17:27:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:24.839 17:27:21 -- nvmf/common.sh@119 -- # set +e 00:24:24.839 17:27:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:24.839 17:27:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:24.839 rmmod nvme_rdma 00:24:24.839 rmmod nvme_fabrics 00:24:24.839 17:27:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:24.839 17:27:21 -- nvmf/common.sh@123 -- # set -e 00:24:24.839 17:27:21 -- nvmf/common.sh@124 -- # return 0 00:24:24.839 17:27:21 -- nvmf/common.sh@477 -- # '[' -n 1441766 ']' 00:24:24.839 17:27:21 -- nvmf/common.sh@478 -- # killprocess 1441766 00:24:24.839 17:27:21 -- common/autotest_common.sh@936 -- # '[' -z 1441766 ']' 00:24:24.839 17:27:21 -- common/autotest_common.sh@940 -- # kill -0 1441766 00:24:24.839 17:27:21 -- common/autotest_common.sh@941 -- # uname 00:24:24.839 17:27:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:24.839 17:27:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1441766 00:24:24.839 17:27:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:24.839 17:27:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:24.839 17:27:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1441766' 00:24:24.839 killing process with pid 1441766 00:24:24.839 17:27:21 -- common/autotest_common.sh@955 -- # kill 1441766 00:24:24.839 [2024-12-14 17:27:21.331503] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:24.839 17:27:21 -- common/autotest_common.sh@960 -- # wait 1441766 00:24:25.099 17:27:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:25.099 17:27:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:25.099 00:24:25.099 real 0m8.897s 00:24:25.099 user 0m8.845s 00:24:25.099 sys 0m5.711s 00:24:25.099 17:27:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:25.099 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:25.099 ************************************ 00:24:25.099 END TEST nvmf_aer 00:24:25.099 ************************************ 00:24:25.099 17:27:21 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:25.099 17:27:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:25.099 17:27:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:25.099 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:25.099 ************************************ 00:24:25.099 START TEST nvmf_async_init 00:24:25.099 ************************************ 00:24:25.099 17:27:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:25.099 * Looking for test storage... 00:24:25.099 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:25.099 17:27:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:25.099 17:27:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:25.099 17:27:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:25.099 17:27:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:25.099 17:27:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:25.099 17:27:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:25.099 17:27:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:25.099 17:27:21 -- scripts/common.sh@335 -- # IFS=.-: 00:24:25.099 17:27:21 -- scripts/common.sh@335 -- # read -ra ver1 00:24:25.099 17:27:21 -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.099 17:27:21 -- scripts/common.sh@336 -- # read -ra ver2 00:24:25.099 17:27:21 -- scripts/common.sh@337 -- # local 'op=<' 00:24:25.099 17:27:21 -- scripts/common.sh@339 -- # ver1_l=2 00:24:25.099 17:27:21 -- scripts/common.sh@340 -- # ver2_l=1 00:24:25.099 17:27:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:25.099 17:27:21 -- scripts/common.sh@343 -- # case "$op" in 00:24:25.099 17:27:21 -- scripts/common.sh@344 -- # : 1 00:24:25.359 17:27:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:25.359 17:27:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.359 17:27:21 -- scripts/common.sh@364 -- # decimal 1 00:24:25.359 17:27:21 -- scripts/common.sh@352 -- # local d=1 00:24:25.359 17:27:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.359 17:27:21 -- scripts/common.sh@354 -- # echo 1 00:24:25.359 17:27:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:25.359 17:27:21 -- scripts/common.sh@365 -- # decimal 2 00:24:25.359 17:27:21 -- scripts/common.sh@352 -- # local d=2 00:24:25.359 17:27:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.359 17:27:21 -- scripts/common.sh@354 -- # echo 2 00:24:25.359 17:27:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:25.359 17:27:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:25.359 17:27:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:25.359 17:27:21 -- scripts/common.sh@367 -- # return 0 00:24:25.359 17:27:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.359 17:27:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:25.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.359 --rc genhtml_branch_coverage=1 00:24:25.359 --rc genhtml_function_coverage=1 00:24:25.359 --rc genhtml_legend=1 00:24:25.359 --rc geninfo_all_blocks=1 00:24:25.359 --rc geninfo_unexecuted_blocks=1 00:24:25.359 00:24:25.359 ' 00:24:25.359 17:27:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:25.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.359 --rc genhtml_branch_coverage=1 00:24:25.359 --rc genhtml_function_coverage=1 00:24:25.359 --rc genhtml_legend=1 00:24:25.359 --rc geninfo_all_blocks=1 00:24:25.359 --rc geninfo_unexecuted_blocks=1 00:24:25.359 00:24:25.359 ' 00:24:25.359 17:27:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:25.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.359 --rc genhtml_branch_coverage=1 00:24:25.359 --rc genhtml_function_coverage=1 00:24:25.359 --rc genhtml_legend=1 00:24:25.359 --rc geninfo_all_blocks=1 00:24:25.359 --rc geninfo_unexecuted_blocks=1 00:24:25.359 00:24:25.359 ' 00:24:25.359 17:27:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:25.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.359 --rc genhtml_branch_coverage=1 00:24:25.359 --rc genhtml_function_coverage=1 00:24:25.359 --rc genhtml_legend=1 00:24:25.359 --rc geninfo_all_blocks=1 00:24:25.359 --rc geninfo_unexecuted_blocks=1 00:24:25.359 00:24:25.359 ' 00:24:25.359 17:27:21 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.359 17:27:21 -- nvmf/common.sh@7 -- # uname -s 00:24:25.359 17:27:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.359 17:27:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.359 17:27:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.359 17:27:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.359 17:27:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.359 17:27:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.359 17:27:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.359 17:27:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.359 17:27:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.359 17:27:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.359 17:27:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:25.359 17:27:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:25.359 17:27:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.359 17:27:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.359 17:27:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.359 17:27:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:25.359 17:27:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.359 17:27:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.359 17:27:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.359 17:27:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.359 17:27:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.359 17:27:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.359 17:27:21 -- paths/export.sh@5 -- # export PATH 00:24:25.359 17:27:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.359 17:27:21 -- nvmf/common.sh@46 -- # : 0 00:24:25.359 17:27:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:25.359 17:27:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:25.359 17:27:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:25.359 17:27:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.359 17:27:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.359 17:27:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:25.359 17:27:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:25.359 17:27:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:25.359 17:27:21 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:25.359 17:27:21 -- host/async_init.sh@14 -- # null_block_size=512 00:24:25.359 17:27:21 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:25.359 17:27:21 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:25.359 17:27:21 -- host/async_init.sh@20 -- # uuidgen 00:24:25.359 17:27:21 -- host/async_init.sh@20 -- # tr -d - 00:24:25.359 17:27:21 -- host/async_init.sh@20 -- # nguid=03bdabda4f9b4e479a28b460e12417c1 00:24:25.359 17:27:21 -- host/async_init.sh@22 -- # nvmftestinit 00:24:25.359 17:27:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:25.359 17:27:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.359 17:27:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:25.359 17:27:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:25.359 17:27:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:25.359 17:27:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.359 17:27:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.359 17:27:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.359 17:27:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:25.359 17:27:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:25.359 17:27:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:25.359 17:27:21 -- common/autotest_common.sh@10 -- # set +x 00:24:31.934 17:27:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:31.934 17:27:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:31.934 17:27:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:31.934 17:27:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:31.934 17:27:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:31.934 17:27:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:31.934 17:27:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:31.934 17:27:28 -- nvmf/common.sh@294 -- # net_devs=() 00:24:31.934 17:27:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:31.934 17:27:28 -- nvmf/common.sh@295 -- # e810=() 00:24:31.934 17:27:28 -- nvmf/common.sh@295 -- # local -ga e810 00:24:31.934 17:27:28 -- nvmf/common.sh@296 -- # x722=() 00:24:31.934 17:27:28 -- nvmf/common.sh@296 -- # local -ga x722 00:24:31.934 17:27:28 -- nvmf/common.sh@297 -- # mlx=() 00:24:31.934 17:27:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:31.934 17:27:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.934 17:27:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:31.934 17:27:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:31.934 17:27:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:31.934 17:27:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:31.934 17:27:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:31.934 17:27:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:31.934 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:31.934 17:27:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:31.934 17:27:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:31.934 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:31.934 17:27:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:31.934 17:27:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:31.934 17:27:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.934 17:27:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:31.934 17:27:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.934 17:27:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:31.934 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:31.934 17:27:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.934 17:27:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.934 17:27:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:31.934 17:27:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.934 17:27:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:31.934 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:31.934 17:27:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.934 17:27:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:31.934 17:27:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:31.934 17:27:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:31.934 17:27:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:31.934 17:27:28 -- nvmf/common.sh@57 -- # uname 00:24:31.934 17:27:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:31.934 17:27:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:31.934 17:27:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:31.934 17:27:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:31.934 17:27:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:31.934 17:27:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:31.934 17:27:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:31.934 17:27:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:31.934 17:27:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:31.934 17:27:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:31.934 17:27:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:31.934 17:27:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:31.934 17:27:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:31.934 17:27:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:31.934 17:27:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:31.934 17:27:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:31.934 17:27:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:31.934 17:27:28 -- nvmf/common.sh@104 -- # continue 2 00:24:31.934 17:27:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:31.934 17:27:28 -- nvmf/common.sh@104 -- # continue 2 00:24:31.934 17:27:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:31.934 17:27:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:31.934 17:27:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:31.934 17:27:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:31.934 17:27:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:31.934 17:27:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:31.934 17:27:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:31.934 17:27:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:31.934 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:31.934 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:31.934 altname enp217s0f0np0 00:24:31.934 altname ens818f0np0 00:24:31.934 inet 192.168.100.8/24 scope global mlx_0_0 00:24:31.934 valid_lft forever preferred_lft forever 00:24:31.934 17:27:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:31.934 17:27:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:31.934 17:27:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:31.934 17:27:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:31.934 17:27:28 -- 
nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:31.934 17:27:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:31.934 17:27:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:31.934 17:27:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:31.934 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:31.934 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:31.934 altname enp217s0f1np1 00:24:31.934 altname ens818f1np1 00:24:31.934 inet 192.168.100.9/24 scope global mlx_0_1 00:24:31.934 valid_lft forever preferred_lft forever 00:24:31.934 17:27:28 -- nvmf/common.sh@410 -- # return 0 00:24:31.934 17:27:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:31.934 17:27:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:31.934 17:27:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:31.934 17:27:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:31.934 17:27:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:31.934 17:27:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:31.934 17:27:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:31.934 17:27:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:31.934 17:27:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:31.934 17:27:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:31.934 17:27:28 -- nvmf/common.sh@104 -- # continue 2 00:24:31.934 17:27:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:31.934 17:27:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:31.934 17:27:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:31.935 17:27:28 -- nvmf/common.sh@104 -- # continue 2 00:24:31.935 17:27:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:31.935 17:27:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:31.935 17:27:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:31.935 17:27:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:31.935 17:27:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:31.935 17:27:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:31.935 17:27:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:31.935 17:27:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:31.935 17:27:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:31.935 17:27:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:31.935 17:27:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:31.935 17:27:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:31.935 17:27:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:31.935 192.168.100.9' 00:24:31.935 17:27:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:31.935 192.168.100.9' 00:24:31.935 17:27:28 -- nvmf/common.sh@445 -- # head -n 1 00:24:31.935 17:27:28 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:32.194 17:27:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:32.194 192.168.100.9' 00:24:32.194 17:27:28 -- nvmf/common.sh@446 -- # tail -n +2 00:24:32.194 17:27:28 -- nvmf/common.sh@446 -- # head -n 1 00:24:32.194 17:27:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:32.194 17:27:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:32.194 17:27:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:32.194 17:27:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:32.194 17:27:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:32.194 17:27:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:32.194 17:27:28 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:32.194 17:27:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:32.194 17:27:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.194 17:27:28 -- common/autotest_common.sh@10 -- # set +x 00:24:32.194 17:27:28 -- nvmf/common.sh@469 -- # nvmfpid=1445464 00:24:32.194 17:27:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:32.194 17:27:28 -- nvmf/common.sh@470 -- # waitforlisten 1445464 00:24:32.194 17:27:28 -- common/autotest_common.sh@829 -- # '[' -z 1445464 ']' 00:24:32.194 17:27:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.194 17:27:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.194 17:27:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.194 17:27:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.194 17:27:28 -- common/autotest_common.sh@10 -- # set +x 00:24:32.194 [2024-12-14 17:27:28.704974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:32.194 [2024-12-14 17:27:28.705024] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.194 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.194 [2024-12-14 17:27:28.775728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.194 [2024-12-14 17:27:28.811688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:32.194 [2024-12-14 17:27:28.811820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.194 [2024-12-14 17:27:28.811829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.194 [2024-12-14 17:27:28.811838] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:32.194 [2024-12-14 17:27:28.811865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.132 17:27:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.133 17:27:29 -- common/autotest_common.sh@862 -- # return 0 00:24:33.133 17:27:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:33.133 17:27:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 17:27:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.133 17:27:29 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 [2024-12-14 17:27:29.581401] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x550230/0x5546e0) succeed. 00:24:33.133 [2024-12-14 17:27:29.590469] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5516e0/0x595d80) succeed. 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 null0 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 03bdabda4f9b4e479a28b460e12417c1 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 [2024-12-14 17:27:29.673472] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 nvme0n1 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 
-- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 [ 00:24:33.133 { 00:24:33.133 "name": "nvme0n1", 00:24:33.133 "aliases": [ 00:24:33.133 "03bdabda-4f9b-4e47-9a28-b460e12417c1" 00:24:33.133 ], 00:24:33.133 "product_name": "NVMe disk", 00:24:33.133 "block_size": 512, 00:24:33.133 "num_blocks": 2097152, 00:24:33.133 "uuid": "03bdabda-4f9b-4e47-9a28-b460e12417c1", 00:24:33.133 "assigned_rate_limits": { 00:24:33.133 "rw_ios_per_sec": 0, 00:24:33.133 "rw_mbytes_per_sec": 0, 00:24:33.133 "r_mbytes_per_sec": 0, 00:24:33.133 "w_mbytes_per_sec": 0 00:24:33.133 }, 00:24:33.133 "claimed": false, 00:24:33.133 "zoned": false, 00:24:33.133 "supported_io_types": { 00:24:33.133 "read": true, 00:24:33.133 "write": true, 00:24:33.133 "unmap": false, 00:24:33.133 "write_zeroes": true, 00:24:33.133 "flush": true, 00:24:33.133 "reset": true, 00:24:33.133 "compare": true, 00:24:33.133 "compare_and_write": true, 00:24:33.133 "abort": true, 00:24:33.133 "nvme_admin": true, 00:24:33.133 "nvme_io": true 00:24:33.133 }, 00:24:33.133 "memory_domains": [ 00:24:33.133 { 00:24:33.133 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:33.133 "dma_device_type": 0 00:24:33.133 } 00:24:33.133 ], 00:24:33.133 "driver_specific": { 00:24:33.133 "nvme": [ 00:24:33.133 { 00:24:33.133 "trid": { 00:24:33.133 "trtype": "RDMA", 00:24:33.133 "adrfam": "IPv4", 00:24:33.133 "traddr": "192.168.100.8", 00:24:33.133 "trsvcid": "4420", 00:24:33.133 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:33.133 }, 00:24:33.133 "ctrlr_data": { 00:24:33.133 "cntlid": 1, 00:24:33.133 "vendor_id": "0x8086", 00:24:33.133 "model_number": "SPDK bdev Controller", 00:24:33.133 "serial_number": "00000000000000000000", 00:24:33.133 "firmware_revision": "24.01.1", 00:24:33.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:33.133 "oacs": { 00:24:33.133 "security": 0, 00:24:33.133 "format": 0, 00:24:33.133 "firmware": 0, 00:24:33.133 "ns_manage": 0 00:24:33.133 }, 00:24:33.133 "multi_ctrlr": true, 00:24:33.133 "ana_reporting": false 00:24:33.133 }, 00:24:33.133 "vs": { 00:24:33.133 "nvme_version": "1.3" 00:24:33.133 }, 00:24:33.133 "ns_data": { 00:24:33.133 "id": 1, 00:24:33.133 "can_share": true 00:24:33.133 } 00:24:33.133 } 00:24:33.133 ], 00:24:33.133 "mp_policy": "active_passive" 00:24:33.133 } 00:24:33.133 } 00:24:33.133 ] 00:24:33.133 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.133 17:27:29 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:33.133 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.133 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.133 [2024-12-14 17:27:29.779431] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:33.133 [2024-12-14 17:27:29.797210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:33.392 [2024-12-14 17:27:29.818782] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
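The target-side setup and the host reset that just completed follow a short RPC sequence. A minimal sketch of that sequence, assuming nvmf_tgt is already running with the rdma transport created above, 192.168.100.8 is the RDMA-capable address allocated earlier, the nguid is the value generated for this run, and scripts/rpc.py is used in place of the harness's rpc_cmd wrapper:

    # Target side: export a null bdev as nsid 1 of cnode0 on the RDMA listener.
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 03bdabda4f9b4e479a28b460e12417c1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

    # Host side: attach over RDMA, reset the controller, then re-read the bdev.
    # As the bdev_get_bdevs listings in this run show, each new controller
    # association bumps the reported cntlid (1 after the attach, 2 after the reset).
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1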
00:24:33.392 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.392 17:27:29 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:33.392 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.392 [ 00:24:33.392 { 00:24:33.392 "name": "nvme0n1", 00:24:33.392 "aliases": [ 00:24:33.392 "03bdabda-4f9b-4e47-9a28-b460e12417c1" 00:24:33.392 ], 00:24:33.392 "product_name": "NVMe disk", 00:24:33.392 "block_size": 512, 00:24:33.392 "num_blocks": 2097152, 00:24:33.392 "uuid": "03bdabda-4f9b-4e47-9a28-b460e12417c1", 00:24:33.392 "assigned_rate_limits": { 00:24:33.392 "rw_ios_per_sec": 0, 00:24:33.392 "rw_mbytes_per_sec": 0, 00:24:33.392 "r_mbytes_per_sec": 0, 00:24:33.392 "w_mbytes_per_sec": 0 00:24:33.392 }, 00:24:33.392 "claimed": false, 00:24:33.392 "zoned": false, 00:24:33.392 "supported_io_types": { 00:24:33.392 "read": true, 00:24:33.392 "write": true, 00:24:33.392 "unmap": false, 00:24:33.392 "write_zeroes": true, 00:24:33.392 "flush": true, 00:24:33.392 "reset": true, 00:24:33.392 "compare": true, 00:24:33.392 "compare_and_write": true, 00:24:33.392 "abort": true, 00:24:33.392 "nvme_admin": true, 00:24:33.392 "nvme_io": true 00:24:33.392 }, 00:24:33.392 "memory_domains": [ 00:24:33.392 { 00:24:33.392 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:33.392 "dma_device_type": 0 00:24:33.392 } 00:24:33.392 ], 00:24:33.392 "driver_specific": { 00:24:33.392 "nvme": [ 00:24:33.392 { 00:24:33.392 "trid": { 00:24:33.392 "trtype": "RDMA", 00:24:33.392 "adrfam": "IPv4", 00:24:33.392 "traddr": "192.168.100.8", 00:24:33.392 "trsvcid": "4420", 00:24:33.392 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:33.392 }, 00:24:33.392 "ctrlr_data": { 00:24:33.392 "cntlid": 2, 00:24:33.392 "vendor_id": "0x8086", 00:24:33.392 "model_number": "SPDK bdev Controller", 00:24:33.392 "serial_number": "00000000000000000000", 00:24:33.392 "firmware_revision": "24.01.1", 00:24:33.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:33.392 "oacs": { 00:24:33.392 "security": 0, 00:24:33.392 "format": 0, 00:24:33.392 "firmware": 0, 00:24:33.392 "ns_manage": 0 00:24:33.392 }, 00:24:33.392 "multi_ctrlr": true, 00:24:33.392 "ana_reporting": false 00:24:33.392 }, 00:24:33.392 "vs": { 00:24:33.392 "nvme_version": "1.3" 00:24:33.392 }, 00:24:33.392 "ns_data": { 00:24:33.392 "id": 1, 00:24:33.392 "can_share": true 00:24:33.392 } 00:24:33.392 } 00:24:33.392 ], 00:24:33.392 "mp_policy": "active_passive" 00:24:33.392 } 00:24:33.392 } 00:24:33.392 ] 00:24:33.392 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.392 17:27:29 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.392 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.392 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.392 17:27:29 -- host/async_init.sh@53 -- # mktemp 00:24:33.392 17:27:29 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9qMym0B9ZP 00:24:33.392 17:27:29 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:33.392 17:27:29 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9qMym0B9ZP 00:24:33.392 17:27:29 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@10 -- # set +x 
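The secure-channel steps that follow restrict cnode0 to a single, key-authenticated host. A minimal sketch of the same sequence, assuming the key file written above (/tmp/tmp.9qMym0B9ZP holding the NVMeTLSkey-1:01:... interchange key), the second listener port 4421, and scripts/rpc.py in place of rpc_cmd; note the log itself flags this TLS path as experimental.

    # allow_any_host was just disabled, so only explicitly added hosts may connect.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9qMym0B9ZP

    # Host side: the attach must present the matching host NQN and the same key.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9qMym0B9ZP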
00:24:33.392 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.392 17:27:29 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:33.392 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.392 [2024-12-14 17:27:29.898077] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:33.392 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.392 17:27:29 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9qMym0B9ZP 00:24:33.392 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.392 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.392 17:27:29 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9qMym0B9ZP 00:24:33.392 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.392 [2024-12-14 17:27:29.914097] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.392 nvme0n1 00:24:33.392 17:27:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.392 17:27:29 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:33.392 17:27:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.392 17:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:33.392 [ 00:24:33.392 { 00:24:33.392 "name": "nvme0n1", 00:24:33.392 "aliases": [ 00:24:33.392 "03bdabda-4f9b-4e47-9a28-b460e12417c1" 00:24:33.392 ], 00:24:33.392 "product_name": "NVMe disk", 00:24:33.392 "block_size": 512, 00:24:33.392 "num_blocks": 2097152, 00:24:33.392 "uuid": "03bdabda-4f9b-4e47-9a28-b460e12417c1", 00:24:33.392 "assigned_rate_limits": { 00:24:33.392 "rw_ios_per_sec": 0, 00:24:33.392 "rw_mbytes_per_sec": 0, 00:24:33.392 "r_mbytes_per_sec": 0, 00:24:33.392 "w_mbytes_per_sec": 0 00:24:33.392 }, 00:24:33.392 "claimed": false, 00:24:33.392 "zoned": false, 00:24:33.393 "supported_io_types": { 00:24:33.393 "read": true, 00:24:33.393 "write": true, 00:24:33.393 "unmap": false, 00:24:33.393 "write_zeroes": true, 00:24:33.393 "flush": true, 00:24:33.393 "reset": true, 00:24:33.393 "compare": true, 00:24:33.393 "compare_and_write": true, 00:24:33.393 "abort": true, 00:24:33.393 "nvme_admin": true, 00:24:33.393 "nvme_io": true 00:24:33.393 }, 00:24:33.393 "memory_domains": [ 00:24:33.393 { 00:24:33.393 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:33.393 "dma_device_type": 0 00:24:33.393 } 00:24:33.393 ], 00:24:33.393 "driver_specific": { 00:24:33.393 "nvme": [ 00:24:33.393 { 00:24:33.393 "trid": { 00:24:33.393 "trtype": "RDMA", 00:24:33.393 "adrfam": "IPv4", 00:24:33.393 "traddr": "192.168.100.8", 00:24:33.393 "trsvcid": "4421", 00:24:33.393 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:33.393 }, 00:24:33.393 "ctrlr_data": { 00:24:33.393 "cntlid": 3, 00:24:33.393 "vendor_id": "0x8086", 00:24:33.393 "model_number": "SPDK bdev Controller", 00:24:33.393 "serial_number": "00000000000000000000", 00:24:33.393 "firmware_revision": "24.01.1", 00:24:33.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:33.393 
"oacs": { 00:24:33.393 "security": 0, 00:24:33.393 "format": 0, 00:24:33.393 "firmware": 0, 00:24:33.393 "ns_manage": 0 00:24:33.393 }, 00:24:33.393 "multi_ctrlr": true, 00:24:33.393 "ana_reporting": false 00:24:33.393 }, 00:24:33.393 "vs": { 00:24:33.393 "nvme_version": "1.3" 00:24:33.393 }, 00:24:33.393 "ns_data": { 00:24:33.393 "id": 1, 00:24:33.393 "can_share": true 00:24:33.393 } 00:24:33.393 } 00:24:33.393 ], 00:24:33.393 "mp_policy": "active_passive" 00:24:33.393 } 00:24:33.393 } 00:24:33.393 ] 00:24:33.393 17:27:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.393 17:27:30 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.393 17:27:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.393 17:27:30 -- common/autotest_common.sh@10 -- # set +x 00:24:33.393 17:27:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.393 17:27:30 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.9qMym0B9ZP 00:24:33.393 17:27:30 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:33.393 17:27:30 -- host/async_init.sh@78 -- # nvmftestfini 00:24:33.393 17:27:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:33.393 17:27:30 -- nvmf/common.sh@116 -- # sync 00:24:33.393 17:27:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:33.393 17:27:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:33.393 17:27:30 -- nvmf/common.sh@119 -- # set +e 00:24:33.393 17:27:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:33.393 17:27:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:33.393 rmmod nvme_rdma 00:24:33.393 rmmod nvme_fabrics 00:24:33.652 17:27:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:33.652 17:27:30 -- nvmf/common.sh@123 -- # set -e 00:24:33.652 17:27:30 -- nvmf/common.sh@124 -- # return 0 00:24:33.652 17:27:30 -- nvmf/common.sh@477 -- # '[' -n 1445464 ']' 00:24:33.652 17:27:30 -- nvmf/common.sh@478 -- # killprocess 1445464 00:24:33.652 17:27:30 -- common/autotest_common.sh@936 -- # '[' -z 1445464 ']' 00:24:33.652 17:27:30 -- common/autotest_common.sh@940 -- # kill -0 1445464 00:24:33.652 17:27:30 -- common/autotest_common.sh@941 -- # uname 00:24:33.652 17:27:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:33.652 17:27:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1445464 00:24:33.652 17:27:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:33.652 17:27:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:33.652 17:27:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1445464' 00:24:33.652 killing process with pid 1445464 00:24:33.652 17:27:30 -- common/autotest_common.sh@955 -- # kill 1445464 00:24:33.652 17:27:30 -- common/autotest_common.sh@960 -- # wait 1445464 00:24:33.911 17:27:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:33.911 17:27:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:33.911 00:24:33.911 real 0m8.739s 00:24:33.911 user 0m3.874s 00:24:33.911 sys 0m5.618s 00:24:33.911 17:27:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:33.911 17:27:30 -- common/autotest_common.sh@10 -- # set +x 00:24:33.911 ************************************ 00:24:33.911 END TEST nvmf_async_init 00:24:33.911 ************************************ 00:24:33.911 17:27:30 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:33.911 17:27:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:33.911 
17:27:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:33.911 17:27:30 -- common/autotest_common.sh@10 -- # set +x 00:24:33.911 ************************************ 00:24:33.911 START TEST dma 00:24:33.911 ************************************ 00:24:33.911 17:27:30 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:33.911 * Looking for test storage... 00:24:33.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:33.911 17:27:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:33.911 17:27:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:33.911 17:27:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:33.911 17:27:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:33.911 17:27:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:33.911 17:27:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:33.911 17:27:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:33.911 17:27:30 -- scripts/common.sh@335 -- # IFS=.-: 00:24:33.911 17:27:30 -- scripts/common.sh@335 -- # read -ra ver1 00:24:33.911 17:27:30 -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.911 17:27:30 -- scripts/common.sh@336 -- # read -ra ver2 00:24:33.911 17:27:30 -- scripts/common.sh@337 -- # local 'op=<' 00:24:33.911 17:27:30 -- scripts/common.sh@339 -- # ver1_l=2 00:24:33.911 17:27:30 -- scripts/common.sh@340 -- # ver2_l=1 00:24:33.911 17:27:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:33.911 17:27:30 -- scripts/common.sh@343 -- # case "$op" in 00:24:33.911 17:27:30 -- scripts/common.sh@344 -- # : 1 00:24:33.911 17:27:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:33.911 17:27:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:33.911 17:27:30 -- scripts/common.sh@364 -- # decimal 1 00:24:33.911 17:27:30 -- scripts/common.sh@352 -- # local d=1 00:24:33.911 17:27:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.911 17:27:30 -- scripts/common.sh@354 -- # echo 1 00:24:33.911 17:27:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:33.911 17:27:30 -- scripts/common.sh@365 -- # decimal 2 00:24:33.911 17:27:30 -- scripts/common.sh@352 -- # local d=2 00:24:33.911 17:27:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.911 17:27:30 -- scripts/common.sh@354 -- # echo 2 00:24:33.911 17:27:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:33.911 17:27:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:33.911 17:27:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:33.911 17:27:30 -- scripts/common.sh@367 -- # return 0 00:24:33.911 17:27:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.911 17:27:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:33.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.911 --rc genhtml_branch_coverage=1 00:24:33.911 --rc genhtml_function_coverage=1 00:24:33.911 --rc genhtml_legend=1 00:24:33.911 --rc geninfo_all_blocks=1 00:24:33.911 --rc geninfo_unexecuted_blocks=1 00:24:33.912 00:24:33.912 ' 00:24:33.912 17:27:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:33.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.912 --rc genhtml_branch_coverage=1 00:24:33.912 --rc genhtml_function_coverage=1 00:24:33.912 --rc genhtml_legend=1 00:24:33.912 --rc geninfo_all_blocks=1 00:24:33.912 --rc geninfo_unexecuted_blocks=1 00:24:33.912 00:24:33.912 ' 00:24:34.171 17:27:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:34.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.171 --rc genhtml_branch_coverage=1 00:24:34.171 --rc genhtml_function_coverage=1 00:24:34.171 --rc genhtml_legend=1 00:24:34.171 --rc geninfo_all_blocks=1 00:24:34.171 --rc geninfo_unexecuted_blocks=1 00:24:34.171 00:24:34.171 ' 00:24:34.171 17:27:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:34.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.171 --rc genhtml_branch_coverage=1 00:24:34.171 --rc genhtml_function_coverage=1 00:24:34.171 --rc genhtml_legend=1 00:24:34.171 --rc geninfo_all_blocks=1 00:24:34.171 --rc geninfo_unexecuted_blocks=1 00:24:34.171 00:24:34.171 ' 00:24:34.171 17:27:30 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.171 17:27:30 -- nvmf/common.sh@7 -- # uname -s 00:24:34.171 17:27:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.171 17:27:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.171 17:27:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.171 17:27:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.171 17:27:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.171 17:27:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.171 17:27:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.171 17:27:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.171 17:27:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.171 17:27:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.171 17:27:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:24:34.171 17:27:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:34.171 17:27:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.171 17:27:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.171 17:27:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.172 17:27:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:34.172 17:27:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.172 17:27:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.172 17:27:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.172 17:27:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.172 17:27:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.172 17:27:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.172 17:27:30 -- paths/export.sh@5 -- # export PATH 00:24:34.172 17:27:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.172 17:27:30 -- nvmf/common.sh@46 -- # : 0 00:24:34.172 17:27:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:34.172 17:27:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:34.172 17:27:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:34.172 17:27:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.172 17:27:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.172 17:27:30 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:34.172 17:27:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:34.172 17:27:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:34.172 17:27:30 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:34.172 17:27:30 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:34.172 17:27:30 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:34.172 17:27:30 -- host/dma.sh@18 -- # subsystem=0 00:24:34.172 17:27:30 -- host/dma.sh@93 -- # nvmftestinit 00:24:34.172 17:27:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:34.172 17:27:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.172 17:27:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:34.172 17:27:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:34.172 17:27:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:34.172 17:27:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.172 17:27:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.172 17:27:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.172 17:27:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:34.172 17:27:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:34.172 17:27:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:34.172 17:27:30 -- common/autotest_common.sh@10 -- # set +x 00:24:40.745 17:27:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.745 17:27:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.746 17:27:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.746 17:27:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.746 17:27:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.746 17:27:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:40.746 17:27:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.746 17:27:37 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.746 17:27:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.746 17:27:37 -- nvmf/common.sh@295 -- # e810=() 00:24:40.746 17:27:37 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.746 17:27:37 -- nvmf/common.sh@296 -- # x722=() 00:24:40.746 17:27:37 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.746 17:27:37 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.746 17:27:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.746 17:27:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.746 17:27:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.746 17:27:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:40.746 17:27:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:40.746 17:27:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:40.746 17:27:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.746 17:27:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:40.746 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:40.746 17:27:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.746 17:27:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:40.746 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:40.746 17:27:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.746 17:27:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.746 17:27:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.746 17:27:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.746 17:27:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.746 17:27:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:40.746 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.746 17:27:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.746 17:27:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.746 17:27:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.746 17:27:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:40.746 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:40.746 17:27:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.746 17:27:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:40.746 17:27:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:40.746 17:27:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:40.746 17:27:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:40.746 17:27:37 -- nvmf/common.sh@57 -- # uname 00:24:40.746 17:27:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:40.746 17:27:37 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:40.746 17:27:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:40.746 17:27:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:40.746 17:27:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:40.746 17:27:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:40.746 17:27:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:40.746 17:27:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:40.746 17:27:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:40.746 17:27:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:40.746 17:27:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:40.746 17:27:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.746 17:27:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:40.746 17:27:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:40.746 17:27:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.746 17:27:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:40.746 17:27:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@104 -- # continue 2 00:24:40.746 17:27:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:40.746 17:27:37 -- nvmf/common.sh@104 -- # continue 2 00:24:40.746 17:27:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:40.746 17:27:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.746 17:27:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:40.746 17:27:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:40.746 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.746 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:40.746 altname enp217s0f0np0 00:24:40.746 altname ens818f0np0 00:24:40.746 inet 192.168.100.8/24 scope global mlx_0_0 00:24:40.746 valid_lft forever preferred_lft forever 00:24:40.746 17:27:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:40.746 17:27:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:40.746 17:27:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.746 17:27:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:40.746 17:27:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:40.746 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.746 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:40.746 altname enp217s0f1np1 00:24:40.746 altname ens818f1np1 00:24:40.746 inet 192.168.100.9/24 scope global mlx_0_1 00:24:40.746 valid_lft forever preferred_lft forever 00:24:40.746 17:27:37 -- nvmf/common.sh@410 -- # return 0 00:24:40.746 17:27:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:40.746 17:27:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:40.746 17:27:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:40.746 17:27:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:40.746 17:27:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.746 17:27:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:40.746 17:27:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:40.746 17:27:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.746 17:27:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:40.746 17:27:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@104 -- # continue 2 00:24:40.746 17:27:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.746 17:27:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.746 17:27:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:40.746 17:27:37 -- nvmf/common.sh@104 -- # continue 2 00:24:40.746 17:27:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:40.746 17:27:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.746 17:27:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.746 17:27:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:40.746 17:27:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:40.746 17:27:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:40.747 17:27:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:40.747 17:27:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.747 17:27:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.747 17:27:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:40.747 192.168.100.9' 00:24:40.747 17:27:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:40.747 192.168.100.9' 00:24:40.747 17:27:37 -- nvmf/common.sh@445 -- # head -n 1 00:24:40.747 17:27:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:40.747 17:27:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:40.747 192.168.100.9' 00:24:40.747 17:27:37 -- nvmf/common.sh@446 -- # tail -n +2 00:24:40.747 17:27:37 -- nvmf/common.sh@446 -- # head -n 1 00:24:40.747 17:27:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:40.747 17:27:37 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:40.747 17:27:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:40.747 17:27:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:40.747 17:27:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:40.747 17:27:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:41.006 17:27:37 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:41.006 17:27:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:41.006 17:27:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:41.006 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:41.006 17:27:37 -- nvmf/common.sh@469 -- # nvmfpid=1448976 00:24:41.006 17:27:37 -- nvmf/common.sh@470 -- # waitforlisten 1448976 00:24:41.006 17:27:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:41.006 17:27:37 -- common/autotest_common.sh@829 -- # '[' -z 1448976 ']' 00:24:41.006 17:27:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.006 17:27:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.006 17:27:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.006 17:27:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.006 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:24:41.006 [2024-12-14 17:27:37.489539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:41.006 [2024-12-14 17:27:37.489593] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.006 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.006 [2024-12-14 17:27:37.560054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:41.006 [2024-12-14 17:27:37.598754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:41.006 [2024-12-14 17:27:37.598886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.006 [2024-12-14 17:27:37.598896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.006 [2024-12-14 17:27:37.598905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
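(For reference only, not part of the captured output: a minimal Bash sketch of the address-discovery pattern the trace above walks through before starting nvmf_tgt. The interface names mlx_0_0/mlx_0_1 and the 192.168.100.8/9 addresses are the ones reported in this run; get_ip_address mirrors the helper traced from nvmf/common.sh.)

    # first IPv4 address on an interface, without the /24 prefix length
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                                   # 192.168.100.8 / 192.168.100.9 here
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma                                           # as traced before nvmfappstart -m 0x3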
00:24:41.006 [2024-12-14 17:27:37.598956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.006 [2024-12-14 17:27:37.598958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.945 17:27:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.945 17:27:38 -- common/autotest_common.sh@862 -- # return 0 00:24:41.945 17:27:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:41.945 17:27:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:41.945 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 17:27:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.945 17:27:38 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:41.945 17:27:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.945 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 [2024-12-14 17:27:38.378711] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x91eb40/0x922ff0) succeed. 00:24:41.945 [2024-12-14 17:27:38.387756] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x91fff0/0x964690) succeed. 00:24:41.945 17:27:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.945 17:27:38 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:41.945 17:27:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.945 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 Malloc0 00:24:41.945 17:27:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.945 17:27:38 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:41.945 17:27:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.945 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 17:27:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.945 17:27:38 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:41.945 17:27:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.945 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 17:27:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.945 17:27:38 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:41.945 17:27:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.945 17:27:38 -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 [2024-12-14 17:27:38.548720] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:41.945 17:27:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.945 17:27:38 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:41.945 17:27:38 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:41.945 17:27:38 -- nvmf/common.sh@520 -- # config=() 00:24:41.945 17:27:38 -- nvmf/common.sh@520 -- # local subsystem config 00:24:41.945 17:27:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:41.945 17:27:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:41.945 { 00:24:41.945 "params": { 00:24:41.945 "name": "Nvme$subsystem", 00:24:41.945 "trtype": "$TEST_TRANSPORT", 00:24:41.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:41.945 "adrfam": 
"ipv4", 00:24:41.945 "trsvcid": "$NVMF_PORT", 00:24:41.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:41.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:41.945 "hdgst": ${hdgst:-false}, 00:24:41.945 "ddgst": ${ddgst:-false} 00:24:41.945 }, 00:24:41.945 "method": "bdev_nvme_attach_controller" 00:24:41.945 } 00:24:41.945 EOF 00:24:41.945 )") 00:24:41.945 17:27:38 -- nvmf/common.sh@542 -- # cat 00:24:41.945 17:27:38 -- nvmf/common.sh@544 -- # jq . 00:24:41.945 17:27:38 -- nvmf/common.sh@545 -- # IFS=, 00:24:41.945 17:27:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:41.945 "params": { 00:24:41.945 "name": "Nvme0", 00:24:41.945 "trtype": "rdma", 00:24:41.945 "traddr": "192.168.100.8", 00:24:41.945 "adrfam": "ipv4", 00:24:41.945 "trsvcid": "4420", 00:24:41.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:41.945 "hdgst": false, 00:24:41.945 "ddgst": false 00:24:41.945 }, 00:24:41.945 "method": "bdev_nvme_attach_controller" 00:24:41.945 }' 00:24:41.945 [2024-12-14 17:27:38.595363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:41.945 [2024-12-14 17:27:38.595419] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449261 ] 00:24:41.945 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.205 [2024-12-14 17:27:38.662108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:42.205 [2024-12-14 17:27:38.698817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.205 [2024-12-14 17:27:38.698820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.479 bdev Nvme0n1 reports 1 memory domains 00:24:47.479 bdev Nvme0n1 supports RDMA memory domain 00:24:47.479 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:47.479 ========================================================================== 00:24:47.479 Latency [us] 00:24:47.479 IOPS MiB/s Average min max 00:24:47.479 Core 2: 21526.04 84.09 742.61 245.68 8649.14 00:24:47.479 Core 3: 22058.73 86.17 724.64 239.01 8676.94 00:24:47.479 ========================================================================== 00:24:47.479 Total : 43584.77 170.25 733.51 239.01 8676.94 00:24:47.479 00:24:47.479 Total operations: 217966, translate 217966 pull_push 0 memzero 0 00:24:47.479 17:27:44 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:47.479 17:27:44 -- host/dma.sh@107 -- # gen_malloc_json 00:24:47.479 17:27:44 -- host/dma.sh@21 -- # jq . 00:24:47.479 [2024-12-14 17:27:44.118171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:47.479 [2024-12-14 17:27:44.118225] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450084 ] 00:24:47.479 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.739 [2024-12-14 17:27:44.185337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:47.739 [2024-12-14 17:27:44.219298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.739 [2024-12-14 17:27:44.219301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.015 bdev Malloc0 reports 1 memory domains 00:24:53.015 bdev Malloc0 doesn't support RDMA memory domain 00:24:53.015 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:53.015 ========================================================================== 00:24:53.015 Latency [us] 00:24:53.015 IOPS MiB/s Average min max 00:24:53.015 Core 2: 14918.35 58.27 1071.77 423.30 1433.75 00:24:53.015 Core 3: 15088.69 58.94 1059.66 417.58 1944.60 00:24:53.015 ========================================================================== 00:24:53.015 Total : 30007.04 117.21 1065.68 417.58 1944.60 00:24:53.015 00:24:53.015 Total operations: 150086, translate 0 pull_push 600344 memzero 0 00:24:53.015 17:27:49 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:53.015 17:27:49 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:53.015 17:27:49 -- host/dma.sh@48 -- # local subsystem=0 00:24:53.015 17:27:49 -- host/dma.sh@50 -- # jq . 00:24:53.015 Ignoring -M option 00:24:53.015 [2024-12-14 17:27:49.551467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:53.015 [2024-12-14 17:27:49.551531] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451071 ] 00:24:53.015 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.015 [2024-12-14 17:27:49.618657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:53.015 [2024-12-14 17:27:49.655031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.015 [2024-12-14 17:27:49.655034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.274 [2024-12-14 17:27:49.858801] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:58.550 [2024-12-14 17:27:54.887331] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:58.550 bdev 4f505939-7298-4466-ae8e-6cba9dbdc72c reports 1 memory domains 00:24:58.550 bdev 4f505939-7298-4466-ae8e-6cba9dbdc72c supports RDMA memory domain 00:24:58.550 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:58.550 ========================================================================== 00:24:58.550 Latency [us] 00:24:58.550 IOPS MiB/s Average min max 00:24:58.550 Core 2: 74433.44 290.76 214.11 85.96 3009.26 00:24:58.550 Core 3: 70558.47 275.62 225.84 64.15 2913.12 00:24:58.550 ========================================================================== 00:24:58.550 Total : 144991.91 566.37 219.82 64.15 3009.26 00:24:58.550 00:24:58.550 Total operations: 725039, translate 0 pull_push 0 memzero 725039 00:24:58.550 17:27:55 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:58.550 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.550 [2024-12-14 17:27:55.181983] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:01.085 Initializing NVMe Controllers 00:25:01.085 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:25:01.085 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:01.085 Initialization complete. Launching workers. 00:25:01.085 ======================================================== 00:25:01.085 Latency(us) 00:25:01.085 Device Information : IOPS MiB/s Average min max 00:25:01.085 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.19 5297.04 9666.26 00:25:01.085 ======================================================== 00:25:01.085 Total : 2016.00 7.88 7972.19 5297.04 9666.26 00:25:01.085 00:25:01.085 17:27:57 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:25:01.085 17:27:57 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:25:01.085 17:27:57 -- host/dma.sh@48 -- # local subsystem=0 00:25:01.085 17:27:57 -- host/dma.sh@50 -- # jq . 
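(Recap, not captured output: a condensed Bash sketch of the cnode0 target setup that host/dma.sh traced earlier in this run, before the translate/pull_push/memzero passes above. rpc_cmd in the trace is effectively scripts/rpc.py against /var/tmp/spdk.sock; every command and value below appears verbatim in the trace.)

    rpc="scripts/rpc.py"                                          # RPC socket: /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc bdev_malloc_create 256 512 -b Malloc0                    # MALLOC_BDEV_SIZE=256, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # test_dma then attaches through the JSON emitted by gen_nvmf_target_json, whose entry is:
    #   { "method": "bdev_nvme_attach_controller",
    #     "params": { "name": "Nvme0", "trtype": "rdma", "traddr": "192.168.100.8",
    #                 "adrfam": "ipv4", "trsvcid": "4420",
    #                 "subnqn": "nqn.2016-06.io.spdk:cnode0",
    #                 "hostnqn": "nqn.2016-06.io.spdk:host0",
    #                 "hdgst": false, "ddgst": false } }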
00:25:01.085 [2024-12-14 17:27:57.523875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:01.085 [2024-12-14 17:27:57.523931] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452393 ] 00:25:01.085 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.085 [2024-12-14 17:27:57.591015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:01.085 [2024-12-14 17:27:57.628044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.085 [2024-12-14 17:27:57.628047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.344 [2024-12-14 17:27:57.841503] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:06.621 [2024-12-14 17:28:02.871864] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:06.621 bdev 65de45a5-a7a2-4e44-9c57-f41111315d4b reports 1 memory domains 00:25:06.621 bdev 65de45a5-a7a2-4e44-9c57-f41111315d4b supports RDMA memory domain 00:25:06.621 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:06.621 ========================================================================== 00:25:06.621 Latency [us] 00:25:06.621 IOPS MiB/s Average min max 00:25:06.621 Core 2: 19202.88 75.01 832.56 15.88 8236.75 00:25:06.621 Core 3: 19535.99 76.31 818.32 13.64 8479.53 00:25:06.621 ========================================================================== 00:25:06.621 Total : 38738.87 151.32 825.38 13.64 8479.53 00:25:06.621 00:25:06.621 Total operations: 193750, translate 193643 pull_push 0 memzero 107 00:25:06.621 17:28:03 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:06.621 17:28:03 -- host/dma.sh@120 -- # nvmftestfini 00:25:06.621 17:28:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:06.621 17:28:03 -- nvmf/common.sh@116 -- # sync 00:25:06.621 17:28:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:06.621 17:28:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:06.621 17:28:03 -- nvmf/common.sh@119 -- # set +e 00:25:06.621 17:28:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:06.621 17:28:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:06.621 rmmod nvme_rdma 00:25:06.621 rmmod nvme_fabrics 00:25:06.621 17:28:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:06.621 17:28:03 -- nvmf/common.sh@123 -- # set -e 00:25:06.621 17:28:03 -- nvmf/common.sh@124 -- # return 0 00:25:06.621 17:28:03 -- nvmf/common.sh@477 -- # '[' -n 1448976 ']' 00:25:06.621 17:28:03 -- nvmf/common.sh@478 -- # killprocess 1448976 00:25:06.621 17:28:03 -- common/autotest_common.sh@936 -- # '[' -z 1448976 ']' 00:25:06.621 17:28:03 -- common/autotest_common.sh@940 -- # kill -0 1448976 00:25:06.621 17:28:03 -- common/autotest_common.sh@941 -- # uname 00:25:06.621 17:28:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:06.621 17:28:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1448976 00:25:06.621 17:28:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:06.621 17:28:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:06.621 17:28:03 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 1448976' 00:25:06.621 killing process with pid 1448976 00:25:06.621 17:28:03 -- common/autotest_common.sh@955 -- # kill 1448976 00:25:06.621 17:28:03 -- common/autotest_common.sh@960 -- # wait 1448976 00:25:06.880 17:28:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:06.881 17:28:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:06.881 00:25:06.881 real 0m33.082s 00:25:06.881 user 1m36.229s 00:25:06.881 sys 0m6.424s 00:25:06.881 17:28:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:06.881 17:28:03 -- common/autotest_common.sh@10 -- # set +x 00:25:06.881 ************************************ 00:25:06.881 END TEST dma 00:25:06.881 ************************************ 00:25:06.881 17:28:03 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:06.881 17:28:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:06.881 17:28:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:06.881 17:28:03 -- common/autotest_common.sh@10 -- # set +x 00:25:06.881 ************************************ 00:25:06.881 START TEST nvmf_identify 00:25:06.881 ************************************ 00:25:06.881 17:28:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:07.141 * Looking for test storage... 00:25:07.141 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:07.141 17:28:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:07.141 17:28:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:07.141 17:28:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:07.141 17:28:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:07.141 17:28:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:07.141 17:28:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:07.141 17:28:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:07.141 17:28:03 -- scripts/common.sh@335 -- # IFS=.-: 00:25:07.141 17:28:03 -- scripts/common.sh@335 -- # read -ra ver1 00:25:07.141 17:28:03 -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.141 17:28:03 -- scripts/common.sh@336 -- # read -ra ver2 00:25:07.141 17:28:03 -- scripts/common.sh@337 -- # local 'op=<' 00:25:07.141 17:28:03 -- scripts/common.sh@339 -- # ver1_l=2 00:25:07.141 17:28:03 -- scripts/common.sh@340 -- # ver2_l=1 00:25:07.141 17:28:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:07.141 17:28:03 -- scripts/common.sh@343 -- # case "$op" in 00:25:07.141 17:28:03 -- scripts/common.sh@344 -- # : 1 00:25:07.141 17:28:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:07.141 17:28:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.141 17:28:03 -- scripts/common.sh@364 -- # decimal 1 00:25:07.141 17:28:03 -- scripts/common.sh@352 -- # local d=1 00:25:07.141 17:28:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.141 17:28:03 -- scripts/common.sh@354 -- # echo 1 00:25:07.141 17:28:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:07.141 17:28:03 -- scripts/common.sh@365 -- # decimal 2 00:25:07.141 17:28:03 -- scripts/common.sh@352 -- # local d=2 00:25:07.141 17:28:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.141 17:28:03 -- scripts/common.sh@354 -- # echo 2 00:25:07.141 17:28:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:07.141 17:28:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:07.141 17:28:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:07.141 17:28:03 -- scripts/common.sh@367 -- # return 0 00:25:07.141 17:28:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.141 17:28:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:07.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.141 --rc genhtml_branch_coverage=1 00:25:07.141 --rc genhtml_function_coverage=1 00:25:07.141 --rc genhtml_legend=1 00:25:07.141 --rc geninfo_all_blocks=1 00:25:07.141 --rc geninfo_unexecuted_blocks=1 00:25:07.141 00:25:07.141 ' 00:25:07.141 17:28:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:07.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.141 --rc genhtml_branch_coverage=1 00:25:07.141 --rc genhtml_function_coverage=1 00:25:07.141 --rc genhtml_legend=1 00:25:07.141 --rc geninfo_all_blocks=1 00:25:07.141 --rc geninfo_unexecuted_blocks=1 00:25:07.141 00:25:07.141 ' 00:25:07.141 17:28:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:07.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.141 --rc genhtml_branch_coverage=1 00:25:07.141 --rc genhtml_function_coverage=1 00:25:07.141 --rc genhtml_legend=1 00:25:07.141 --rc geninfo_all_blocks=1 00:25:07.141 --rc geninfo_unexecuted_blocks=1 00:25:07.141 00:25:07.141 ' 00:25:07.141 17:28:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:07.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.141 --rc genhtml_branch_coverage=1 00:25:07.141 --rc genhtml_function_coverage=1 00:25:07.141 --rc genhtml_legend=1 00:25:07.141 --rc geninfo_all_blocks=1 00:25:07.141 --rc geninfo_unexecuted_blocks=1 00:25:07.141 00:25:07.141 ' 00:25:07.141 17:28:03 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.141 17:28:03 -- nvmf/common.sh@7 -- # uname -s 00:25:07.141 17:28:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.141 17:28:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.141 17:28:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.141 17:28:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.141 17:28:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.141 17:28:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.141 17:28:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.141 17:28:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.141 17:28:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.141 17:28:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.141 17:28:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:07.141 17:28:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:07.141 17:28:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.141 17:28:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.141 17:28:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.141 17:28:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:07.141 17:28:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.141 17:28:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.141 17:28:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.141 17:28:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.141 17:28:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.141 17:28:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.141 17:28:03 -- paths/export.sh@5 -- # export PATH 00:25:07.141 17:28:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.141 17:28:03 -- nvmf/common.sh@46 -- # : 0 00:25:07.141 17:28:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:07.141 17:28:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:07.141 17:28:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:07.141 17:28:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.141 17:28:03 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.141 17:28:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:07.141 17:28:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:07.141 17:28:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:07.141 17:28:03 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:07.141 17:28:03 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:07.141 17:28:03 -- host/identify.sh@14 -- # nvmftestinit 00:25:07.141 17:28:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:07.141 17:28:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.141 17:28:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:07.141 17:28:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:07.141 17:28:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:07.141 17:28:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.141 17:28:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.141 17:28:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.141 17:28:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:07.141 17:28:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:07.141 17:28:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:07.141 17:28:03 -- common/autotest_common.sh@10 -- # set +x 00:25:13.773 17:28:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:13.773 17:28:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:13.773 17:28:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:13.773 17:28:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:13.773 17:28:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:13.773 17:28:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:13.773 17:28:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:13.773 17:28:09 -- nvmf/common.sh@294 -- # net_devs=() 00:25:13.773 17:28:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:13.773 17:28:09 -- nvmf/common.sh@295 -- # e810=() 00:25:13.773 17:28:09 -- nvmf/common.sh@295 -- # local -ga e810 00:25:13.773 17:28:09 -- nvmf/common.sh@296 -- # x722=() 00:25:13.773 17:28:09 -- nvmf/common.sh@296 -- # local -ga x722 00:25:13.773 17:28:09 -- nvmf/common.sh@297 -- # mlx=() 00:25:13.773 17:28:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:13.773 17:28:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.773 17:28:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:13.773 17:28:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:13.773 
17:28:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:13.773 17:28:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:13.773 17:28:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:13.773 17:28:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:13.773 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:13.773 17:28:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:13.773 17:28:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:13.773 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:13.773 17:28:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:13.773 17:28:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:13.773 17:28:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.773 17:28:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.773 17:28:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.773 17:28:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:13.773 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:13.773 17:28:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.773 17:28:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.773 17:28:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.773 17:28:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.773 17:28:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:13.773 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:13.773 17:28:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.773 17:28:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:13.773 17:28:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:13.773 17:28:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:13.773 17:28:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:13.773 17:28:09 -- nvmf/common.sh@57 -- # uname 00:25:13.773 17:28:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:13.773 17:28:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:13.773 
17:28:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:13.773 17:28:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:13.773 17:28:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:13.773 17:28:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:13.773 17:28:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:13.773 17:28:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:13.773 17:28:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:13.773 17:28:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:13.773 17:28:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:13.773 17:28:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:13.773 17:28:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:13.773 17:28:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:13.773 17:28:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:13.773 17:28:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:13.773 17:28:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:13.773 17:28:09 -- nvmf/common.sh@104 -- # continue 2 00:25:13.773 17:28:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.773 17:28:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:13.773 17:28:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:13.773 17:28:09 -- nvmf/common.sh@104 -- # continue 2 00:25:13.773 17:28:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:13.774 17:28:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:13.774 17:28:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:13.774 17:28:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:13.774 17:28:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:13.774 17:28:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:13.774 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:13.774 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:13.774 altname enp217s0f0np0 00:25:13.774 altname ens818f0np0 00:25:13.774 inet 192.168.100.8/24 scope global mlx_0_0 00:25:13.774 valid_lft forever preferred_lft forever 00:25:13.774 17:28:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:13.774 17:28:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:13.774 17:28:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:13.774 17:28:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:13.774 17:28:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:13.774 17:28:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:13.774 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:25:13.774 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:13.774 altname enp217s0f1np1 00:25:13.774 altname ens818f1np1 00:25:13.774 inet 192.168.100.9/24 scope global mlx_0_1 00:25:13.774 valid_lft forever preferred_lft forever 00:25:13.774 17:28:09 -- nvmf/common.sh@410 -- # return 0 00:25:13.774 17:28:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:13.774 17:28:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:13.774 17:28:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:13.774 17:28:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:13.774 17:28:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:13.774 17:28:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:13.774 17:28:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:13.774 17:28:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:13.774 17:28:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:13.774 17:28:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:13.774 17:28:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:13.774 17:28:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.774 17:28:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:13.774 17:28:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:13.774 17:28:09 -- nvmf/common.sh@104 -- # continue 2 00:25:13.774 17:28:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:13.774 17:28:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.774 17:28:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:13.774 17:28:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:13.774 17:28:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:13.774 17:28:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:13.774 17:28:09 -- nvmf/common.sh@104 -- # continue 2 00:25:13.774 17:28:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:13.774 17:28:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:13.774 17:28:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:13.774 17:28:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:13.774 17:28:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:13.774 17:28:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:13.774 17:28:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:13.774 17:28:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:13.774 192.168.100.9' 00:25:13.774 17:28:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:13.774 192.168.100.9' 00:25:13.774 17:28:09 -- nvmf/common.sh@445 -- # head -n 1 00:25:13.774 17:28:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:13.774 17:28:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:13.774 192.168.100.9' 00:25:13.774 17:28:09 -- nvmf/common.sh@446 -- # tail -n +2 00:25:13.774 17:28:09 -- nvmf/common.sh@446 -- # head -n 1 00:25:13.774 17:28:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:13.774 17:28:09 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:25:13.774 17:28:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:13.774 17:28:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:13.774 17:28:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:13.774 17:28:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:13.774 17:28:09 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:13.774 17:28:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.774 17:28:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 17:28:09 -- host/identify.sh@19 -- # nvmfpid=1456503 00:25:13.774 17:28:09 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:13.774 17:28:09 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.774 17:28:09 -- host/identify.sh@23 -- # waitforlisten 1456503 00:25:13.774 17:28:09 -- common/autotest_common.sh@829 -- # '[' -z 1456503 ']' 00:25:13.774 17:28:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.774 17:28:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.774 17:28:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.774 17:28:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.774 17:28:09 -- common/autotest_common.sh@10 -- # set +x 00:25:13.774 [2024-12-14 17:28:10.034357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:13.774 [2024-12-14 17:28:10.034417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.774 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.774 [2024-12-14 17:28:10.106782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:13.774 [2024-12-14 17:28:10.147438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:13.774 [2024-12-14 17:28:10.147557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.774 [2024-12-14 17:28:10.147568] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.774 [2024-12-14 17:28:10.147577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
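(Illustration only, never executed in this log: how the variables just derived would compose into a kernel-initiator connect. NVME_CONNECT='nvme connect -i 15' and the hostnqn/hostid values are taken from the trace above; the cnode1 subsystem name is the one the identify test creates further down, so treat the full command as a hypothetical composition rather than something this run performs.)

    # $NVME_CONNECT ... "${NVME_HOST[@]}" expanded by hand:
    nvme connect -i 15 -t rdma -a "$NVMF_FIRST_TARGET_IP" -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"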
00:25:13.774 [2024-12-14 17:28:10.147640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.774 [2024-12-14 17:28:10.147746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.774 [2024-12-14 17:28:10.147830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.774 [2024-12-14 17:28:10.147831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.343 17:28:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.343 17:28:10 -- common/autotest_common.sh@862 -- # return 0 00:25:14.343 17:28:10 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:14.343 17:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.343 17:28:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.343 [2024-12-14 17:28:10.887464] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14310d0/0x14355a0) succeed. 00:25:14.343 [2024-12-14 17:28:10.896656] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1432670/0x1476c40) succeed. 00:25:14.343 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.343 17:28:11 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:14.343 17:28:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.343 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.606 17:28:11 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.606 17:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.606 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.606 Malloc0 00:25:14.606 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.606 17:28:11 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.606 17:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.606 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.606 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.606 17:28:11 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:14.606 17:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.606 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.606 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.606 17:28:11 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:14.606 17:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.606 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.606 [2024-12-14 17:28:11.108869] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:14.606 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.606 17:28:11 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:14.606 17:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.606 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.606 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.606 17:28:11 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:14.606 17:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.606 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.606 [2024-12-14 
17:28:11.124519] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:25:14.606 [
00:25:14.606 {
00:25:14.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:14.606 "subtype": "Discovery",
00:25:14.606 "listen_addresses": [
00:25:14.606 {
00:25:14.606 "transport": "RDMA",
00:25:14.606 "trtype": "RDMA",
00:25:14.606 "adrfam": "IPv4",
00:25:14.606 "traddr": "192.168.100.8",
00:25:14.606 "trsvcid": "4420"
00:25:14.606 }
00:25:14.606 ],
00:25:14.606 "allow_any_host": true,
00:25:14.606 "hosts": []
00:25:14.606 },
00:25:14.606 {
00:25:14.606 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:14.606 "subtype": "NVMe",
00:25:14.606 "listen_addresses": [
00:25:14.606 {
00:25:14.606 "transport": "RDMA",
00:25:14.606 "trtype": "RDMA",
00:25:14.606 "adrfam": "IPv4",
00:25:14.606 "traddr": "192.168.100.8",
00:25:14.606 "trsvcid": "4420"
00:25:14.606 }
00:25:14.606 ],
00:25:14.606 "allow_any_host": true,
00:25:14.606 "hosts": [],
00:25:14.606 "serial_number": "SPDK00000000000001",
00:25:14.606 "model_number": "SPDK bdev Controller",
00:25:14.606 "max_namespaces": 32,
00:25:14.606 "min_cntlid": 1,
00:25:14.606 "max_cntlid": 65519,
00:25:14.606 "namespaces": [
00:25:14.606 {
00:25:14.606 "nsid": 1,
00:25:14.606 "bdev_name": "Malloc0",
00:25:14.606 "name": "Malloc0",
00:25:14.606 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:25:14.606 "eui64": "ABCDEF0123456789",
00:25:14.606 "uuid": "6a59c3cc-4f6b-40c5-9bd7-3099b9e5ab1f"
00:25:14.606 }
00:25:14.606 ]
00:25:14.606 }
00:25:14.606 ]
00:25:14.606 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:14.606 17:28:11 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:25:14.606 [2024-12-14 17:28:11.167313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:14.606 [2024-12-14 17:28:11.167366] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456790 ]
00:25:14.606 EAL: No free 2048 kB hugepages reported on node 1
00:25:14.606 [2024-12-14 17:28:11.215737] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:25:14.606 [2024-12-14 17:28:11.215808] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:25:14.606 [2024-12-14 17:28:11.215828] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:25:14.606 [2024-12-14 17:28:11.215833] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:25:14.606 [2024-12-14 17:28:11.215865] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:25:14.606 [2024-12-14 17:28:11.227010] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
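For reference, the target-side configuration exercised by host/identify.sh above can be reproduced by hand against a running nvmf_tgt. The following is a minimal sketch, assuming SPDK's standard scripts/rpc.py client (the test's rpc_cmd helper issues the same named RPCs); the argument values are the ones visible in the log entries above, nothing else is implied:

# Target side: RDMA transport, a RAM-backed bdev, one subsystem with a namespace and listeners
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_get_subsystems                          # prints the JSON shown above

# Host side: identify the discovery controller over NVMe/RDMA, with all debug logging enabled
spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The -L all flag is what produces the verbose nvme_ctrlr/nvme_rdma state-machine trace that follows (connect adminq, read VS/CAP, enable the controller, wait for CSTS.RDY, identify, configure AER, set keep-alive) before the identify report itself is printed.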
00:25:14.606 [2024-12-14 17:28:11.237089] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.607 [2024-12-14 17:28:11.237101] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:14.607 [2024-12-14 17:28:11.237108] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237115] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237121] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237131] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237137] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237143] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237149] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237155] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237161] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237167] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237173] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237179] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237186] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237192] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237198] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237204] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237210] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237216] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237222] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237228] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237234] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237240] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237246] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 
17:28:11.237252] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237259] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237265] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237271] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237277] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237283] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237289] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237295] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237301] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:14.607 [2024-12-14 17:28:11.237306] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.607 [2024-12-14 17:28:11.237310] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:14.607 [2024-12-14 17:28:11.237328] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.237342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:14.607 [2024-12-14 17:28:11.242503] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.607 [2024-12-14 17:28:11.242512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.607 [2024-12-14 17:28:11.242520] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242529] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:14.607 [2024-12-14 17:28:11.242536] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:14.607 [2024-12-14 17:28:11.242542] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:14.607 [2024-12-14 17:28:11.242556] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.607 [2024-12-14 17:28:11.242586] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.607 [2024-12-14 17:28:11.242592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:14.607 [2024-12-14 17:28:11.242599] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:14.607 [2024-12-14 17:28:11.242605] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242611] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:14.607 [2024-12-14 17:28:11.242619] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.607 [2024-12-14 17:28:11.242649] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.607 [2024-12-14 17:28:11.242655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:14.607 [2024-12-14 17:28:11.242661] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:14.607 [2024-12-14 17:28:11.242667] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242674] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:14.607 [2024-12-14 17:28:11.242682] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.607 [2024-12-14 17:28:11.242707] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.607 [2024-12-14 17:28:11.242712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:14.607 [2024-12-14 17:28:11.242719] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:14.607 [2024-12-14 17:28:11.242725] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242733] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.607 [2024-12-14 17:28:11.242759] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.607 [2024-12-14 17:28:11.242764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:14.607 [2024-12-14 17:28:11.242770] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:14.607 [2024-12-14 17:28:11.242776] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:14.607 [2024-12-14 17:28:11.242782] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242789] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:14.607 [2024-12-14 17:28:11.242896] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:14.607 [2024-12-14 17:28:11.242902] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:14.607 [2024-12-14 17:28:11.242911] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.607 [2024-12-14 17:28:11.242938] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.607 [2024-12-14 17:28:11.242943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:14.607 [2024-12-14 17:28:11.242950] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:14.607 [2024-12-14 17:28:11.242955] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242963] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.242971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.607 [2024-12-14 17:28:11.242990] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.607 [2024-12-14 17:28:11.242996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.607 [2024-12-14 17:28:11.243002] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:14.607 [2024-12-14 17:28:11.243007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:14.607 [2024-12-14 17:28:11.243013] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.243020] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:14.607 [2024-12-14 17:28:11.243033] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:14.607 [2024-12-14 17:28:11.243043] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.607 [2024-12-14 17:28:11.243050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:14.607 [2024-12-14 17:28:11.243080] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243086] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243095] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:14.608 [2024-12-14 17:28:11.243101] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:14.608 [2024-12-14 17:28:11.243107] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:14.608 [2024-12-14 17:28:11.243113] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:14.608 [2024-12-14 17:28:11.243118] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:14.608 [2024-12-14 17:28:11.243124] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:14.608 [2024-12-14 17:28:11.243130] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243140] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:14.608 [2024-12-14 17:28:11.243147] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.608 [2024-12-14 17:28:11.243174] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243188] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.608 [2024-12-14 17:28:11.243202] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.608 [2024-12-14 17:28:11.243216] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.608 [2024-12-14 17:28:11.243229] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.608 [2024-12-14 17:28:11.243242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:25:14.608 [2024-12-14 17:28:11.243248] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:14.608 [2024-12-14 17:28:11.243265] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.608 [2024-12-14 17:28:11.243294] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243310] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:14.608 [2024-12-14 17:28:11.243316] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:14.608 [2024-12-14 17:28:11.243322] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243330] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:14.608 [2024-12-14 17:28:11.243361] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243374] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243384] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:14.608 [2024-12-14 17:28:11.243405] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x184100 00:25:14.608 [2024-12-14 17:28:11.243421] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.608 [2024-12-14 17:28:11.243445] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243462] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x184100 00:25:14.608 [2024-12-14 17:28:11.243476] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243482] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243493] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243505] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243519] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x184100 00:25:14.608 [2024-12-14 17:28:11.243533] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:14.608 [2024-12-14 17:28:11.243556] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.608 [2024-12-14 17:28:11.243562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.608 [2024-12-14 17:28:11.243573] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:14.608 ===================================================== 00:25:14.608 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:14.608 ===================================================== 00:25:14.608 Controller Capabilities/Features 00:25:14.608 ================================ 00:25:14.608 Vendor ID: 0000 00:25:14.608 Subsystem Vendor ID: 0000 00:25:14.608 Serial Number: .................... 00:25:14.608 Model Number: ........................................ 
00:25:14.608 Firmware Version: 24.01.1 00:25:14.608 Recommended Arb Burst: 0 00:25:14.608 IEEE OUI Identifier: 00 00 00 00:25:14.608 Multi-path I/O 00:25:14.608 May have multiple subsystem ports: No 00:25:14.608 May have multiple controllers: No 00:25:14.608 Associated with SR-IOV VF: No 00:25:14.608 Max Data Transfer Size: 131072 00:25:14.608 Max Number of Namespaces: 0 00:25:14.608 Max Number of I/O Queues: 1024 00:25:14.608 NVMe Specification Version (VS): 1.3 00:25:14.608 NVMe Specification Version (Identify): 1.3 00:25:14.608 Maximum Queue Entries: 128 00:25:14.608 Contiguous Queues Required: Yes 00:25:14.608 Arbitration Mechanisms Supported 00:25:14.608 Weighted Round Robin: Not Supported 00:25:14.608 Vendor Specific: Not Supported 00:25:14.608 Reset Timeout: 15000 ms 00:25:14.608 Doorbell Stride: 4 bytes 00:25:14.608 NVM Subsystem Reset: Not Supported 00:25:14.608 Command Sets Supported 00:25:14.608 NVM Command Set: Supported 00:25:14.608 Boot Partition: Not Supported 00:25:14.608 Memory Page Size Minimum: 4096 bytes 00:25:14.608 Memory Page Size Maximum: 4096 bytes 00:25:14.608 Persistent Memory Region: Not Supported 00:25:14.608 Optional Asynchronous Events Supported 00:25:14.608 Namespace Attribute Notices: Not Supported 00:25:14.608 Firmware Activation Notices: Not Supported 00:25:14.608 ANA Change Notices: Not Supported 00:25:14.608 PLE Aggregate Log Change Notices: Not Supported 00:25:14.608 LBA Status Info Alert Notices: Not Supported 00:25:14.608 EGE Aggregate Log Change Notices: Not Supported 00:25:14.608 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.608 Zone Descriptor Change Notices: Not Supported 00:25:14.608 Discovery Log Change Notices: Supported 00:25:14.608 Controller Attributes 00:25:14.608 128-bit Host Identifier: Not Supported 00:25:14.608 Non-Operational Permissive Mode: Not Supported 00:25:14.608 NVM Sets: Not Supported 00:25:14.608 Read Recovery Levels: Not Supported 00:25:14.608 Endurance Groups: Not Supported 00:25:14.608 Predictable Latency Mode: Not Supported 00:25:14.608 Traffic Based Keep ALive: Not Supported 00:25:14.609 Namespace Granularity: Not Supported 00:25:14.609 SQ Associations: Not Supported 00:25:14.609 UUID List: Not Supported 00:25:14.609 Multi-Domain Subsystem: Not Supported 00:25:14.609 Fixed Capacity Management: Not Supported 00:25:14.609 Variable Capacity Management: Not Supported 00:25:14.609 Delete Endurance Group: Not Supported 00:25:14.609 Delete NVM Set: Not Supported 00:25:14.609 Extended LBA Formats Supported: Not Supported 00:25:14.609 Flexible Data Placement Supported: Not Supported 00:25:14.609 00:25:14.609 Controller Memory Buffer Support 00:25:14.609 ================================ 00:25:14.609 Supported: No 00:25:14.609 00:25:14.609 Persistent Memory Region Support 00:25:14.609 ================================ 00:25:14.609 Supported: No 00:25:14.609 00:25:14.609 Admin Command Set Attributes 00:25:14.609 ============================ 00:25:14.609 Security Send/Receive: Not Supported 00:25:14.609 Format NVM: Not Supported 00:25:14.609 Firmware Activate/Download: Not Supported 00:25:14.609 Namespace Management: Not Supported 00:25:14.609 Device Self-Test: Not Supported 00:25:14.609 Directives: Not Supported 00:25:14.609 NVMe-MI: Not Supported 00:25:14.609 Virtualization Management: Not Supported 00:25:14.609 Doorbell Buffer Config: Not Supported 00:25:14.609 Get LBA Status Capability: Not Supported 00:25:14.609 Command & Feature Lockdown Capability: Not Supported 00:25:14.609 Abort Command Limit: 1 00:25:14.609 
Async Event Request Limit: 4 00:25:14.609 Number of Firmware Slots: N/A 00:25:14.609 Firmware Slot 1 Read-Only: N/A 00:25:14.609 Firmware Activation Without Reset: N/A 00:25:14.609 Multiple Update Detection Support: N/A 00:25:14.609 Firmware Update Granularity: No Information Provided 00:25:14.609 Per-Namespace SMART Log: No 00:25:14.609 Asymmetric Namespace Access Log Page: Not Supported 00:25:14.609 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:14.609 Command Effects Log Page: Not Supported 00:25:14.609 Get Log Page Extended Data: Supported 00:25:14.609 Telemetry Log Pages: Not Supported 00:25:14.609 Persistent Event Log Pages: Not Supported 00:25:14.609 Supported Log Pages Log Page: May Support 00:25:14.609 Commands Supported & Effects Log Page: Not Supported 00:25:14.609 Feature Identifiers & Effects Log Page:May Support 00:25:14.609 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.609 Data Area 4 for Telemetry Log: Not Supported 00:25:14.609 Error Log Page Entries Supported: 128 00:25:14.609 Keep Alive: Not Supported 00:25:14.609 00:25:14.609 NVM Command Set Attributes 00:25:14.609 ========================== 00:25:14.609 Submission Queue Entry Size 00:25:14.609 Max: 1 00:25:14.609 Min: 1 00:25:14.609 Completion Queue Entry Size 00:25:14.609 Max: 1 00:25:14.609 Min: 1 00:25:14.609 Number of Namespaces: 0 00:25:14.609 Compare Command: Not Supported 00:25:14.609 Write Uncorrectable Command: Not Supported 00:25:14.609 Dataset Management Command: Not Supported 00:25:14.609 Write Zeroes Command: Not Supported 00:25:14.609 Set Features Save Field: Not Supported 00:25:14.609 Reservations: Not Supported 00:25:14.609 Timestamp: Not Supported 00:25:14.609 Copy: Not Supported 00:25:14.609 Volatile Write Cache: Not Present 00:25:14.609 Atomic Write Unit (Normal): 1 00:25:14.609 Atomic Write Unit (PFail): 1 00:25:14.609 Atomic Compare & Write Unit: 1 00:25:14.609 Fused Compare & Write: Supported 00:25:14.609 Scatter-Gather List 00:25:14.609 SGL Command Set: Supported 00:25:14.609 SGL Keyed: Supported 00:25:14.609 SGL Bit Bucket Descriptor: Not Supported 00:25:14.609 SGL Metadata Pointer: Not Supported 00:25:14.609 Oversized SGL: Not Supported 00:25:14.609 SGL Metadata Address: Not Supported 00:25:14.609 SGL Offset: Supported 00:25:14.609 Transport SGL Data Block: Not Supported 00:25:14.609 Replay Protected Memory Block: Not Supported 00:25:14.609 00:25:14.609 Firmware Slot Information 00:25:14.609 ========================= 00:25:14.609 Active slot: 0 00:25:14.609 00:25:14.609 00:25:14.609 Error Log 00:25:14.609 ========= 00:25:14.609 00:25:14.609 Active Namespaces 00:25:14.609 ================= 00:25:14.609 Discovery Log Page 00:25:14.609 ================== 00:25:14.609 Generation Counter: 2 00:25:14.609 Number of Records: 2 00:25:14.609 Record Format: 0 00:25:14.609 00:25:14.609 Discovery Log Entry 0 00:25:14.609 ---------------------- 00:25:14.609 Transport Type: 1 (RDMA) 00:25:14.609 Address Family: 1 (IPv4) 00:25:14.609 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:14.609 Entry Flags: 00:25:14.609 Duplicate Returned Information: 1 00:25:14.609 Explicit Persistent Connection Support for Discovery: 1 00:25:14.609 Transport Requirements: 00:25:14.609 Secure Channel: Not Required 00:25:14.609 Port ID: 0 (0x0000) 00:25:14.609 Controller ID: 65535 (0xffff) 00:25:14.609 Admin Max SQ Size: 128 00:25:14.609 Transport Service Identifier: 4420 00:25:14.609 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:14.609 Transport Address: 192.168.100.8 
00:25:14.609 Transport Specific Address Subtype - RDMA 00:25:14.609 RDMA QP Service Type: 1 (Reliable Connected) 00:25:14.609 RDMA Provider Type: 1 (No provider specified) 00:25:14.609 RDMA CM Service: 1 (RDMA_CM) 00:25:14.609 Discovery Log Entry 1 00:25:14.609 ---------------------- 00:25:14.609 Transport Type: 1 (RDMA) 00:25:14.609 Address Family: 1 (IPv4) 00:25:14.609 Subsystem Type: 2 (NVM Subsystem) 00:25:14.609 Entry Flags: 00:25:14.609 Duplicate Returned Information: 0 00:25:14.609 Explicit Persistent Connection Support for Discovery: 0 00:25:14.609 Transport Requirements: 00:25:14.609 Secure Channel: Not Required 00:25:14.609 Port ID: 0 (0x0000) 00:25:14.609 Controller ID: 65535 (0xffff) 00:25:14.609 Admin Max SQ Size: [2024-12-14 17:28:11.243643] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:14.609 [2024-12-14 17:28:11.243654] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55963 doesn't match qid 00:25:14.609 [2024-12-14 17:28:11.243668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243675] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55963 doesn't match qid 00:25:14.609 [2024-12-14 17:28:11.243683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243689] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55963 doesn't match qid 00:25:14.609 [2024-12-14 17:28:11.243696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243703] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55963 doesn't match qid 00:25:14.609 [2024-12-14 17:28:11.243710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:5 sqhd:3e28 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243719] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.609 [2024-12-14 17:28:11.243747] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.609 [2024-12-14 17:28:11.243753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243761] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.609 [2024-12-14 17:28:11.243774] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243794] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.609 [2024-12-14 17:28:11.243800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243807] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:14.609 [2024-12-14 17:28:11.243812] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:14.609 [2024-12-14 17:28:11.243819] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243827] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.609 [2024-12-14 17:28:11.243850] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.609 [2024-12-14 17:28:11.243856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243863] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243871] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.609 [2024-12-14 17:28:11.243899] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.609 [2024-12-14 17:28:11.243904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:14.609 [2024-12-14 17:28:11.243910] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:14.609 [2024-12-14 17:28:11.243919] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.243927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.243945] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.243950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.243956] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.243965] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.243973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.243994] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244007] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244015] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244037] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244049] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244058] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244081] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244093] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244102] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244135] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244146] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244156] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244184] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244195] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244204] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244231] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:25:14.610 [2024-12-14 17:28:11.244243] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244251] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244274] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244286] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244294] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244317] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244329] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244337] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244363] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244374] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244383] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244412] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244423] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244433] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 
17:28:11.244454] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244466] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244475] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244502] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244514] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244523] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244548] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244560] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244568] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244595] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244607] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244615] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244637] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244648] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244657] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.610 [2024-12-14 17:28:11.244684] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.610 [2024-12-14 17:28:11.244689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.610 [2024-12-14 17:28:11.244699] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244707] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.610 [2024-12-14 17:28:11.244715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.244729] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.244734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.244741] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244749] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.244778] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.244783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.244790] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244798] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.244827] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.244832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.244838] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244847] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.244876] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.244881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:14.611 
[2024-12-14 17:28:11.244887] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244896] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.244923] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.244928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.244935] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244943] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.244968] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.244974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.244981] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244990] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.244997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245013] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245025] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245062] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245074] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245082] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245111] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245123] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245132] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245162] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245174] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245183] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245210] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245221] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245230] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245255] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245268] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245276] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245302] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245313] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245322] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245347] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245358] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245367] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.611 [2024-12-14 17:28:11.245374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.611 [2024-12-14 17:28:11.245398] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.611 [2024-12-14 17:28:11.245403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:14.611 [2024-12-14 17:28:11.245409] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245418] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245445] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245456] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245465] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245486] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245502] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245510] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245539] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 
17:28:11.245552] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245561] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245584] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245596] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245604] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245627] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245639] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245647] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245678] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245690] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245698] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245720] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245732] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245740] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245763] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245775] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245783] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245805] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245816] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245825] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245848] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245860] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245868] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245899] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245911] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245919] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245943] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.245954] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245963] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.245970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.245994] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.245999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.246005] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246014] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.246037] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.246042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.246049] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246057] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.246080] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.246086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.246092] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246100] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.246124] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.246129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 17:28:11.246135] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246144] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.612 [2024-12-14 17:28:11.246169] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.612 [2024-12-14 17:28:11.246174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.612 [2024-12-14 
17:28:11.246180] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.612 [2024-12-14 17:28:11.246189] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.246216] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.246221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.246227] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246236] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.246263] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.246268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.246275] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246283] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.246304] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.246310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.246316] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246325] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.246350] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.246355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.246361] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246370] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.246395] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.246400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.246406] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246415] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.246438] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.246444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.246450] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246458] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.246466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.246480] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.246485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.246491] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.250507] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.250515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.613 [2024-12-14 17:28:11.250539] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.613 [2024-12-14 17:28:11.250544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:25:14.613 [2024-12-14 17:28:11.250551] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:14.613 [2024-12-14 17:28:11.250558] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:14.613 128 00:25:14.613 Transport Service Identifier: 4420 00:25:14.613 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:14.613 Transport Address: 192.168.100.8 00:25:14.613 Transport Specific Address Subtype - RDMA 00:25:14.613 RDMA QP Service Type: 1 (Reliable Connected) 00:25:14.613 RDMA Provider Type: 1 (No provider specified) 00:25:14.613 RDMA CM Service: 1 (RDMA_CM) 00:25:14.876 17:28:11 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:14.876 [2024-12-14 17:28:11.319718] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 
22.11.4 initialization... 00:25:14.876 [2024-12-14 17:28:11.319772] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456797 ] 00:25:14.876 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.876 [2024-12-14 17:28:11.368060] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:14.876 [2024-12-14 17:28:11.368125] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:14.876 [2024-12-14 17:28:11.368139] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:14.876 [2024-12-14 17:28:11.368144] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:14.876 [2024-12-14 17:28:11.368166] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:14.876 [2024-12-14 17:28:11.384005] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:25:14.876 [2024-12-14 17:28:11.398065] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.876 [2024-12-14 17:28:11.398075] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:14.876 [2024-12-14 17:28:11.398081] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.876 [2024-12-14 17:28:11.398088] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.876 [2024-12-14 17:28:11.398094] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.876 [2024-12-14 17:28:11.398100] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:14.876 [2024-12-14 17:28:11.398106] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398112] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398118] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398124] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398131] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398137] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398143] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398149] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398155] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398161] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398167] nvme_rdma.c: 
964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398173] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398179] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398185] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398193] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398200] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398206] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398212] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398218] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398224] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398230] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398236] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398242] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398248] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398254] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398260] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398266] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398272] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:14.877 [2024-12-14 17:28:11.398277] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.877 [2024-12-14 17:28:11.398281] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:14.877 [2024-12-14 17:28:11.398295] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.398306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:14.877 [2024-12-14 17:28:11.403504] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.877 [2024-12-14 17:28:11.403512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.877 [2024-12-14 17:28:11.403519] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403526] 
nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:14.877 [2024-12-14 17:28:11.403532] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:14.877 [2024-12-14 17:28:11.403539] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:14.877 [2024-12-14 17:28:11.403549] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.877 [2024-12-14 17:28:11.403575] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.877 [2024-12-14 17:28:11.403580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:14.877 [2024-12-14 17:28:11.403587] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:14.877 [2024-12-14 17:28:11.403593] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403599] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:14.877 [2024-12-14 17:28:11.403609] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.877 [2024-12-14 17:28:11.403632] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.877 [2024-12-14 17:28:11.403638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:14.877 [2024-12-14 17:28:11.403644] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:14.877 [2024-12-14 17:28:11.403650] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403657] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:14.877 [2024-12-14 17:28:11.403665] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.877 [2024-12-14 17:28:11.403688] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.877 [2024-12-14 17:28:11.403694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:14.877 [2024-12-14 17:28:11.403700] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:14.877 [2024-12-14 17:28:11.403706] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 
00:25:14.877 [2024-12-14 17:28:11.403714] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.877 [2024-12-14 17:28:11.403745] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.877 [2024-12-14 17:28:11.403751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:14.877 [2024-12-14 17:28:11.403757] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:14.877 [2024-12-14 17:28:11.403763] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:14.877 [2024-12-14 17:28:11.403768] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403775] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:14.877 [2024-12-14 17:28:11.403881] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:14.877 [2024-12-14 17:28:11.403886] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:14.877 [2024-12-14 17:28:11.403894] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.877 [2024-12-14 17:28:11.403917] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.877 [2024-12-14 17:28:11.403922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:14.877 [2024-12-14 17:28:11.403928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:14.877 [2024-12-14 17:28:11.403935] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403944] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.877 [2024-12-14 17:28:11.403970] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.877 [2024-12-14 17:28:11.403975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.877 [2024-12-14 17:28:11.403981] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:14.877 [2024-12-14 17:28:11.403987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:25:14.877 [2024-12-14 17:28:11.403993] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.403999] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:14.877 [2024-12-14 17:28:11.404007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:14.877 [2024-12-14 17:28:11.404016] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.877 [2024-12-14 17:28:11.404024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:14.878 [2024-12-14 17:28:11.404064] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404078] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:14.878 [2024-12-14 17:28:11.404083] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:14.878 [2024-12-14 17:28:11.404089] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:14.878 [2024-12-14 17:28:11.404094] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:14.878 [2024-12-14 17:28:11.404100] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:14.878 [2024-12-14 17:28:11.404105] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404111] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404120] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404128] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.878 [2024-12-14 17:28:11.404155] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404169] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.878 [2024-12-14 17:28:11.404184] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:14.878 
[2024-12-14 17:28:11.404191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.878 [2024-12-14 17:28:11.404197] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.878 [2024-12-14 17:28:11.404211] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.878 [2024-12-14 17:28:11.404224] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404229] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404239] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404246] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.878 [2024-12-14 17:28:11.404276] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404287] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:14.878 [2024-12-14 17:28:11.404293] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404299] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404306] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404315] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404322] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.878 [2024-12-14 17:28:11.404351] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404404] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404410] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404418] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404426] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184100 00:25:14.878 [2024-12-14 17:28:11.404459] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404477] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:14.878 [2024-12-14 17:28:11.404487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404493] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404505] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404513] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:14.878 [2024-12-14 17:28:11.404550] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404568] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404574] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404582] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404590] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:14.878 [2024-12-14 17:28:11.404619] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404634] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404640] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404647] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404655] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404662] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404668] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404674] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:14.878 [2024-12-14 17:28:11.404680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:14.878 [2024-12-14 17:28:11.404686] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:14.878 [2024-12-14 17:28:11.404701] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.878 [2024-12-14 17:28:11.404716] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.878 [2024-12-14 17:28:11.404733] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404745] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404752] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404763] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404772] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.878 [2024-12-14 
17:28:11.404796] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.878 [2024-12-14 17:28:11.404802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.878 [2024-12-14 17:28:11.404808] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:14.878 [2024-12-14 17:28:11.404817] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.879 [2024-12-14 17:28:11.404844] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.879 [2024-12-14 17:28:11.404850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.879 [2024-12-14 17:28:11.404856] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404864] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.879 [2024-12-14 17:28:11.404895] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.879 [2024-12-14 17:28:11.404900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:25:14.879 [2024-12-14 17:28:11.404906] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404917] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x184100 00:25:14.879 [2024-12-14 17:28:11.404933] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x184100 00:25:14.879 [2024-12-14 17:28:11.404950] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x184100 00:25:14.879 [2024-12-14 17:28:11.404965] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.404972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x184100 
00:25:14.879 [2024-12-14 17:28:11.404981] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.879 [2024-12-14 17:28:11.404987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:14.879 [2024-12-14 17:28:11.404998] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.405004] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.879 [2024-12-14 17:28:11.405010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:14.879 [2024-12-14 17:28:11.405018] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.405025] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.879 [2024-12-14 17:28:11.405030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:14.879 [2024-12-14 17:28:11.405037] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:14.879 [2024-12-14 17:28:11.405043] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.879 [2024-12-14 17:28:11.405048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:14.879 [2024-12-14 17:28:11.405058] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:14.879 ===================================================== 00:25:14.879 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.879 ===================================================== 00:25:14.879 Controller Capabilities/Features 00:25:14.879 ================================ 00:25:14.879 Vendor ID: 8086 00:25:14.879 Subsystem Vendor ID: 8086 00:25:14.879 Serial Number: SPDK00000000000001 00:25:14.879 Model Number: SPDK bdev Controller 00:25:14.879 Firmware Version: 24.01.1 00:25:14.879 Recommended Arb Burst: 6 00:25:14.879 IEEE OUI Identifier: e4 d2 5c 00:25:14.879 Multi-path I/O 00:25:14.879 May have multiple subsystem ports: Yes 00:25:14.879 May have multiple controllers: Yes 00:25:14.879 Associated with SR-IOV VF: No 00:25:14.879 Max Data Transfer Size: 131072 00:25:14.879 Max Number of Namespaces: 32 00:25:14.879 Max Number of I/O Queues: 127 00:25:14.879 NVMe Specification Version (VS): 1.3 00:25:14.879 NVMe Specification Version (Identify): 1.3 00:25:14.879 Maximum Queue Entries: 128 00:25:14.879 Contiguous Queues Required: Yes 00:25:14.879 Arbitration Mechanisms Supported 00:25:14.879 Weighted Round Robin: Not Supported 00:25:14.879 Vendor Specific: Not Supported 00:25:14.879 Reset Timeout: 15000 ms 00:25:14.879 Doorbell Stride: 4 bytes 00:25:14.879 NVM Subsystem Reset: Not Supported 00:25:14.879 Command Sets Supported 00:25:14.879 NVM Command Set: Supported 00:25:14.879 Boot Partition: Not Supported 00:25:14.879 Memory Page Size Minimum: 4096 bytes 00:25:14.879 Memory Page Size Maximum: 4096 bytes 00:25:14.879 Persistent Memory Region: Not Supported 00:25:14.879 Optional Asynchronous Events Supported 00:25:14.879 Namespace Attribute Notices: Supported 00:25:14.879 Firmware Activation Notices: Not Supported 00:25:14.879 ANA Change Notices: Not Supported 00:25:14.879 PLE 
Aggregate Log Change Notices: Not Supported 00:25:14.879 LBA Status Info Alert Notices: Not Supported 00:25:14.879 EGE Aggregate Log Change Notices: Not Supported 00:25:14.879 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.879 Zone Descriptor Change Notices: Not Supported 00:25:14.879 Discovery Log Change Notices: Not Supported 00:25:14.879 Controller Attributes 00:25:14.879 128-bit Host Identifier: Supported 00:25:14.879 Non-Operational Permissive Mode: Not Supported 00:25:14.879 NVM Sets: Not Supported 00:25:14.879 Read Recovery Levels: Not Supported 00:25:14.879 Endurance Groups: Not Supported 00:25:14.879 Predictable Latency Mode: Not Supported 00:25:14.879 Traffic Based Keep ALive: Not Supported 00:25:14.879 Namespace Granularity: Not Supported 00:25:14.879 SQ Associations: Not Supported 00:25:14.879 UUID List: Not Supported 00:25:14.879 Multi-Domain Subsystem: Not Supported 00:25:14.879 Fixed Capacity Management: Not Supported 00:25:14.879 Variable Capacity Management: Not Supported 00:25:14.879 Delete Endurance Group: Not Supported 00:25:14.879 Delete NVM Set: Not Supported 00:25:14.879 Extended LBA Formats Supported: Not Supported 00:25:14.879 Flexible Data Placement Supported: Not Supported 00:25:14.879 00:25:14.879 Controller Memory Buffer Support 00:25:14.879 ================================ 00:25:14.879 Supported: No 00:25:14.879 00:25:14.879 Persistent Memory Region Support 00:25:14.879 ================================ 00:25:14.879 Supported: No 00:25:14.879 00:25:14.879 Admin Command Set Attributes 00:25:14.879 ============================ 00:25:14.879 Security Send/Receive: Not Supported 00:25:14.879 Format NVM: Not Supported 00:25:14.879 Firmware Activate/Download: Not Supported 00:25:14.879 Namespace Management: Not Supported 00:25:14.879 Device Self-Test: Not Supported 00:25:14.879 Directives: Not Supported 00:25:14.879 NVMe-MI: Not Supported 00:25:14.879 Virtualization Management: Not Supported 00:25:14.879 Doorbell Buffer Config: Not Supported 00:25:14.879 Get LBA Status Capability: Not Supported 00:25:14.879 Command & Feature Lockdown Capability: Not Supported 00:25:14.879 Abort Command Limit: 4 00:25:14.879 Async Event Request Limit: 4 00:25:14.879 Number of Firmware Slots: N/A 00:25:14.879 Firmware Slot 1 Read-Only: N/A 00:25:14.879 Firmware Activation Without Reset: N/A 00:25:14.879 Multiple Update Detection Support: N/A 00:25:14.879 Firmware Update Granularity: No Information Provided 00:25:14.879 Per-Namespace SMART Log: No 00:25:14.879 Asymmetric Namespace Access Log Page: Not Supported 00:25:14.879 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:14.879 Command Effects Log Page: Supported 00:25:14.879 Get Log Page Extended Data: Supported 00:25:14.879 Telemetry Log Pages: Not Supported 00:25:14.879 Persistent Event Log Pages: Not Supported 00:25:14.879 Supported Log Pages Log Page: May Support 00:25:14.879 Commands Supported & Effects Log Page: Not Supported 00:25:14.879 Feature Identifiers & Effects Log Page:May Support 00:25:14.879 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.879 Data Area 4 for Telemetry Log: Not Supported 00:25:14.879 Error Log Page Entries Supported: 128 00:25:14.879 Keep Alive: Supported 00:25:14.879 Keep Alive Granularity: 10000 ms 00:25:14.879 00:25:14.880 NVM Command Set Attributes 00:25:14.880 ========================== 00:25:14.880 Submission Queue Entry Size 00:25:14.880 Max: 64 00:25:14.880 Min: 64 00:25:14.880 Completion Queue Entry Size 00:25:14.880 Max: 16 00:25:14.880 Min: 16 00:25:14.880 Number of 
Namespaces: 32 00:25:14.880 Compare Command: Supported 00:25:14.880 Write Uncorrectable Command: Not Supported 00:25:14.880 Dataset Management Command: Supported 00:25:14.880 Write Zeroes Command: Supported 00:25:14.880 Set Features Save Field: Not Supported 00:25:14.880 Reservations: Supported 00:25:14.880 Timestamp: Not Supported 00:25:14.880 Copy: Supported 00:25:14.880 Volatile Write Cache: Present 00:25:14.880 Atomic Write Unit (Normal): 1 00:25:14.880 Atomic Write Unit (PFail): 1 00:25:14.880 Atomic Compare & Write Unit: 1 00:25:14.880 Fused Compare & Write: Supported 00:25:14.880 Scatter-Gather List 00:25:14.880 SGL Command Set: Supported 00:25:14.880 SGL Keyed: Supported 00:25:14.880 SGL Bit Bucket Descriptor: Not Supported 00:25:14.880 SGL Metadata Pointer: Not Supported 00:25:14.880 Oversized SGL: Not Supported 00:25:14.880 SGL Metadata Address: Not Supported 00:25:14.880 SGL Offset: Supported 00:25:14.880 Transport SGL Data Block: Not Supported 00:25:14.880 Replay Protected Memory Block: Not Supported 00:25:14.880 00:25:14.880 Firmware Slot Information 00:25:14.880 ========================= 00:25:14.880 Active slot: 1 00:25:14.880 Slot 1 Firmware Revision: 24.01.1 00:25:14.880 00:25:14.880 00:25:14.880 Commands Supported and Effects 00:25:14.880 ============================== 00:25:14.880 Admin Commands 00:25:14.880 -------------- 00:25:14.880 Get Log Page (02h): Supported 00:25:14.880 Identify (06h): Supported 00:25:14.880 Abort (08h): Supported 00:25:14.880 Set Features (09h): Supported 00:25:14.880 Get Features (0Ah): Supported 00:25:14.880 Asynchronous Event Request (0Ch): Supported 00:25:14.880 Keep Alive (18h): Supported 00:25:14.880 I/O Commands 00:25:14.880 ------------ 00:25:14.880 Flush (00h): Supported LBA-Change 00:25:14.880 Write (01h): Supported LBA-Change 00:25:14.880 Read (02h): Supported 00:25:14.880 Compare (05h): Supported 00:25:14.880 Write Zeroes (08h): Supported LBA-Change 00:25:14.880 Dataset Management (09h): Supported LBA-Change 00:25:14.880 Copy (19h): Supported LBA-Change 00:25:14.880 Unknown (79h): Supported LBA-Change 00:25:14.880 Unknown (7Ah): Supported 00:25:14.880 00:25:14.880 Error Log 00:25:14.880 ========= 00:25:14.880 00:25:14.880 Arbitration 00:25:14.880 =========== 00:25:14.880 Arbitration Burst: 1 00:25:14.880 00:25:14.880 Power Management 00:25:14.880 ================ 00:25:14.880 Number of Power States: 1 00:25:14.880 Current Power State: Power State #0 00:25:14.880 Power State #0: 00:25:14.880 Max Power: 0.00 W 00:25:14.880 Non-Operational State: Operational 00:25:14.880 Entry Latency: Not Reported 00:25:14.880 Exit Latency: Not Reported 00:25:14.880 Relative Read Throughput: 0 00:25:14.880 Relative Read Latency: 0 00:25:14.880 Relative Write Throughput: 0 00:25:14.880 Relative Write Latency: 0 00:25:14.880 Idle Power: Not Reported 00:25:14.880 Active Power: Not Reported 00:25:14.880 Non-Operational Permissive Mode: Not Supported 00:25:14.880 00:25:14.880 Health Information 00:25:14.880 ================== 00:25:14.880 Critical Warnings: 00:25:14.880 Available Spare Space: OK 00:25:14.880 Temperature: OK 00:25:14.880 Device Reliability: OK 00:25:14.880 Read Only: No 00:25:14.880 Volatile Memory Backup: OK 00:25:14.880 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:14.880 Temperature Threshol[2024-12-14 17:28:11.405140] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 17:28:11.405163] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.880 [2024-12-14 17:28:11.405168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405174] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405198] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:14.880 [2024-12-14 17:28:11.405208] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16985 doesn't match qid 00:25:14.880 [2024-12-14 17:28:11.405221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405227] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16985 doesn't match qid 00:25:14.880 [2024-12-14 17:28:11.405235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405241] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16985 doesn't match qid 00:25:14.880 [2024-12-14 17:28:11.405249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405256] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 16985 doesn't match qid 00:25:14.880 [2024-12-14 17:28:11.405263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32734 cdw0:5 sqhd:fe28 p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405272] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 17:28:11.405294] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.880 [2024-12-14 17:28:11.405300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405308] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 17:28:11.405322] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405338] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.880 [2024-12-14 17:28:11.405344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405350] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:14.880 [2024-12-14 17:28:11.405356] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 
00:25:14.880 [2024-12-14 17:28:11.405362] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405370] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 17:28:11.405396] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.880 [2024-12-14 17:28:11.405401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405408] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405417] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 17:28:11.405439] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.880 [2024-12-14 17:28:11.405445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405451] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405459] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 17:28:11.405481] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.880 [2024-12-14 17:28:11.405486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405493] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405509] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 17:28:11.405533] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.880 [2024-12-14 17:28:11.405539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.880 [2024-12-14 17:28:11.405545] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405553] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.880 [2024-12-14 17:28:11.405561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.880 [2024-12-14 
17:28:11.405581] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405593] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405602] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405633] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405645] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405654] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405683] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405695] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405704] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405729] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405741] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405749] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405777] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405790] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405798] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405825] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405837] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405846] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405871] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405882] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405891] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405916] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405928] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405936] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.405961] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.405967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.405973] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405982] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.405989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.406003] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.406008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:14.881 
[2024-12-14 17:28:11.406014] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406023] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.406045] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.406050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.406058] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406066] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.406088] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.406093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.406099] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406108] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.406135] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.406140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.406147] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406155] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.406177] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.406182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.406188] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406197] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.881 [2024-12-14 17:28:11.406204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.881 [2024-12-14 17:28:11.406220] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.881 [2024-12-14 17:28:11.406225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:14.881 [2024-12-14 17:28:11.406232] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406240] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406265] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406277] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406285] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406307] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406319] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406328] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406349] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406361] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406370] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406393] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406405] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406413] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406441] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406452] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406461] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406484] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406500] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406509] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406536] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406548] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406556] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406578] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406591] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406599] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406625] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 
17:28:11.406636] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406645] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406666] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406678] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406686] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406710] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406721] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406730] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406751] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406763] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406771] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406795] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406806] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406815] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406839] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406851] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406860] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406887] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406898] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406907] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406932] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406943] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406952] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.406973] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.406979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.406985] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.406993] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.882 [2024-12-14 17:28:11.407001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.882 [2024-12-14 17:28:11.407019] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.882 [2024-12-14 17:28:11.407024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.882 [2024-12-14 17:28:11.407030] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407039] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407064] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407076] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407084] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407116] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407128] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407136] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407158] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407169] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407178] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407199] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407211] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407219] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407245] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 
17:28:11.407256] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407265] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407297] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407309] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407318] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407339] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407350] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407359] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407382] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407393] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407402] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407429] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407441] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407449] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.407457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.407477] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.407482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.407488] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.411502] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.411511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.883 [2024-12-14 17:28:11.411526] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.883 [2024-12-14 17:28:11.411532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:25:14.883 [2024-12-14 17:28:11.411538] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:14.883 [2024-12-14 17:28:11.411545] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:14.883 d: 0 Kelvin (-273 Celsius) 00:25:14.883 Available Spare: 0% 00:25:14.883 Available Spare Threshold: 0% 00:25:14.883 Life Percentage Used: 0% 00:25:14.883 Data Units Read: 0 00:25:14.883 Data Units Written: 0 00:25:14.883 Host Read Commands: 0 00:25:14.883 Host Write Commands: 0 00:25:14.883 Controller Busy Time: 0 minutes 00:25:14.883 Power Cycles: 0 00:25:14.883 Power On Hours: 0 hours 00:25:14.883 Unsafe Shutdowns: 0 00:25:14.883 Unrecoverable Media Errors: 0 00:25:14.883 Lifetime Error Log Entries: 0 00:25:14.883 Warning Temperature Time: 0 minutes 00:25:14.883 Critical Temperature Time: 0 minutes 00:25:14.883 00:25:14.883 Number of Queues 00:25:14.883 ================ 00:25:14.883 Number of I/O Submission Queues: 127 00:25:14.883 Number of I/O Completion Queues: 127 00:25:14.883 00:25:14.883 Active Namespaces 00:25:14.883 ================= 00:25:14.883 Namespace ID:1 00:25:14.883 Error Recovery Timeout: Unlimited 00:25:14.883 Command Set Identifier: NVM (00h) 00:25:14.883 Deallocate: Supported 00:25:14.883 Deallocated/Unwritten Error: Not Supported 00:25:14.883 Deallocated Read Value: Unknown 00:25:14.883 Deallocate in Write Zeroes: Not Supported 00:25:14.883 Deallocated Guard Field: 0xFFFF 00:25:14.883 Flush: Supported 00:25:14.883 Reservation: Supported 00:25:14.883 Namespace Sharing Capabilities: Multiple Controllers 00:25:14.883 Size (in LBAs): 131072 (0GiB) 00:25:14.883 Capacity (in LBAs): 131072 (0GiB) 00:25:14.883 Utilization (in LBAs): 131072 (0GiB) 00:25:14.883 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:14.883 EUI64: ABCDEF0123456789 00:25:14.883 UUID: 6a59c3cc-4f6b-40c5-9bd7-3099b9e5ab1f 00:25:14.883 Thin Provisioning: Not Supported 00:25:14.883 Per-NS Atomic Units: Yes 00:25:14.883 Atomic Boundary Size (Normal): 0 00:25:14.883 Atomic Boundary Size (PFail): 0 00:25:14.883 Atomic Boundary Offset: 0 00:25:14.883 Maximum Single Source Range Length: 65535 00:25:14.883 Maximum Copy Length: 65535 00:25:14.883 Maximum Source Range Count: 1 00:25:14.883 NGUID/EUI64 Never Reused: No 00:25:14.883 Namespace Write Protected: No 00:25:14.883 Number of LBA Formats: 1 00:25:14.883 Current LBA Format: LBA Format #00 00:25:14.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:14.883 
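For reference, the Identify data dumped above can also be pulled by hand from a host using the kernel RDMA initiator rather than the SPDK test harness. A minimal sketch, assuming nvme-cli is installed, the nvme-rdma module is still loaded, and the target from this run is still exported at 192.168.100.8:4420 as nqn.2016-06.io.spdk:cnode1 (the /dev/nvme0 and /dev/nvme0n1 device names are illustrative and depend on enumeration order):

  # discover the subsystems the target advertises over RDMA
  nvme discover -t rdma -a 192.168.100.8 -s 4420

  # connect to the subsystem shown in the log above
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1

  # dump the controller and namespace identify structures
  # (the same fields as the "Controller Capabilities/Features" and
  #  "Active Namespaces" sections printed by the test)
  nvme id-ctrl /dev/nvme0
  nvme id-ns /dev/nvme0n1

  # tear the association back down when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

This only reproduces the Identify step; the shutdown sequence traced above (FABRIC PROPERTY GET/SET followed by the RTD3E-bounded shutdown poll) is handled internally by whichever initiator performs the disconnect.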
00:25:14.883 17:28:11 -- host/identify.sh@51 -- # sync 00:25:14.883 17:28:11 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:14.883 17:28:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.883 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:14.883 17:28:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.883 17:28:11 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:14.883 17:28:11 -- host/identify.sh@56 -- # nvmftestfini 00:25:14.883 17:28:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:14.883 17:28:11 -- nvmf/common.sh@116 -- # sync 00:25:14.883 17:28:11 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:14.883 17:28:11 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:14.883 17:28:11 -- nvmf/common.sh@119 -- # set +e 00:25:14.883 17:28:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:14.883 17:28:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:14.883 rmmod nvme_rdma 00:25:14.884 rmmod nvme_fabrics 00:25:14.884 17:28:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:14.884 17:28:11 -- nvmf/common.sh@123 -- # set -e 00:25:14.884 17:28:11 -- nvmf/common.sh@124 -- # return 0 00:25:14.884 17:28:11 -- nvmf/common.sh@477 -- # '[' -n 1456503 ']' 00:25:14.884 17:28:11 -- nvmf/common.sh@478 -- # killprocess 1456503 00:25:14.884 17:28:11 -- common/autotest_common.sh@936 -- # '[' -z 1456503 ']' 00:25:14.884 17:28:11 -- common/autotest_common.sh@940 -- # kill -0 1456503 00:25:14.884 17:28:11 -- common/autotest_common.sh@941 -- # uname 00:25:14.884 17:28:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:14.884 17:28:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1456503 00:25:15.143 17:28:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:15.143 17:28:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:15.143 17:28:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1456503' 00:25:15.143 killing process with pid 1456503 00:25:15.143 17:28:11 -- common/autotest_common.sh@955 -- # kill 1456503 00:25:15.143 [2024-12-14 17:28:11.576065] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:15.143 17:28:11 -- common/autotest_common.sh@960 -- # wait 1456503 00:25:15.403 17:28:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:15.403 17:28:11 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:15.403 00:25:15.403 real 0m8.289s 00:25:15.403 user 0m8.324s 00:25:15.403 sys 0m5.401s 00:25:15.403 17:28:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:15.403 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:15.403 ************************************ 00:25:15.403 END TEST nvmf_identify 00:25:15.403 ************************************ 00:25:15.403 17:28:11 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:15.403 17:28:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:15.403 17:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:15.403 17:28:11 -- common/autotest_common.sh@10 -- # set +x 00:25:15.403 ************************************ 00:25:15.403 START TEST nvmf_perf 00:25:15.403 ************************************ 00:25:15.403 17:28:11 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:15.403 * Looking for test storage... 00:25:15.403 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:15.403 17:28:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:15.403 17:28:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:15.403 17:28:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:15.403 17:28:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:15.403 17:28:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:15.403 17:28:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:15.403 17:28:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:15.403 17:28:12 -- scripts/common.sh@335 -- # IFS=.-: 00:25:15.403 17:28:12 -- scripts/common.sh@335 -- # read -ra ver1 00:25:15.403 17:28:12 -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.403 17:28:12 -- scripts/common.sh@336 -- # read -ra ver2 00:25:15.403 17:28:12 -- scripts/common.sh@337 -- # local 'op=<' 00:25:15.403 17:28:12 -- scripts/common.sh@339 -- # ver1_l=2 00:25:15.403 17:28:12 -- scripts/common.sh@340 -- # ver2_l=1 00:25:15.403 17:28:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:15.403 17:28:12 -- scripts/common.sh@343 -- # case "$op" in 00:25:15.403 17:28:12 -- scripts/common.sh@344 -- # : 1 00:25:15.403 17:28:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:15.403 17:28:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:15.403 17:28:12 -- scripts/common.sh@364 -- # decimal 1 00:25:15.403 17:28:12 -- scripts/common.sh@352 -- # local d=1 00:25:15.403 17:28:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.403 17:28:12 -- scripts/common.sh@354 -- # echo 1 00:25:15.403 17:28:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:15.403 17:28:12 -- scripts/common.sh@365 -- # decimal 2 00:25:15.403 17:28:12 -- scripts/common.sh@352 -- # local d=2 00:25:15.403 17:28:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.403 17:28:12 -- scripts/common.sh@354 -- # echo 2 00:25:15.403 17:28:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:15.403 17:28:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:15.403 17:28:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:15.403 17:28:12 -- scripts/common.sh@367 -- # return 0 00:25:15.403 17:28:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.403 17:28:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.403 --rc genhtml_branch_coverage=1 00:25:15.403 --rc genhtml_function_coverage=1 00:25:15.403 --rc genhtml_legend=1 00:25:15.403 --rc geninfo_all_blocks=1 00:25:15.403 --rc geninfo_unexecuted_blocks=1 00:25:15.403 00:25:15.403 ' 00:25:15.403 17:28:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.403 --rc genhtml_branch_coverage=1 00:25:15.403 --rc genhtml_function_coverage=1 00:25:15.403 --rc genhtml_legend=1 00:25:15.403 --rc geninfo_all_blocks=1 00:25:15.403 --rc geninfo_unexecuted_blocks=1 00:25:15.403 00:25:15.403 ' 00:25:15.404 17:28:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.404 --rc genhtml_branch_coverage=1 00:25:15.404 --rc genhtml_function_coverage=1 00:25:15.404 --rc 
genhtml_legend=1 00:25:15.404 --rc geninfo_all_blocks=1 00:25:15.404 --rc geninfo_unexecuted_blocks=1 00:25:15.404 00:25:15.404 ' 00:25:15.404 17:28:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.404 --rc genhtml_branch_coverage=1 00:25:15.404 --rc genhtml_function_coverage=1 00:25:15.404 --rc genhtml_legend=1 00:25:15.404 --rc geninfo_all_blocks=1 00:25:15.404 --rc geninfo_unexecuted_blocks=1 00:25:15.404 00:25:15.404 ' 00:25:15.404 17:28:12 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.404 17:28:12 -- nvmf/common.sh@7 -- # uname -s 00:25:15.404 17:28:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.404 17:28:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.404 17:28:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.404 17:28:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.404 17:28:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.404 17:28:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.404 17:28:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.404 17:28:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.404 17:28:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.404 17:28:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.404 17:28:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:15.404 17:28:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:15.404 17:28:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.404 17:28:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.404 17:28:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.404 17:28:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:15.404 17:28:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.404 17:28:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.404 17:28:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.404 17:28:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.404 17:28:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.404 17:28:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.404 17:28:12 -- paths/export.sh@5 -- # export PATH 00:25:15.404 17:28:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.404 17:28:12 -- nvmf/common.sh@46 -- # : 0 00:25:15.404 17:28:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:15.404 17:28:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:15.404 17:28:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:15.404 17:28:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.664 17:28:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.664 17:28:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:15.664 17:28:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:15.664 17:28:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:15.664 17:28:12 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:15.664 17:28:12 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:15.664 17:28:12 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:15.664 17:28:12 -- host/perf.sh@17 -- # nvmftestinit 00:25:15.664 17:28:12 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:15.664 17:28:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.664 17:28:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:15.664 17:28:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:15.664 17:28:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:15.664 17:28:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.664 17:28:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.664 17:28:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.664 17:28:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:15.664 17:28:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:15.664 17:28:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:15.664 17:28:12 -- common/autotest_common.sh@10 -- # set +x 00:25:22.235 17:28:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:22.235 17:28:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:22.235 17:28:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:22.235 17:28:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:22.235 17:28:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:22.235 17:28:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:22.235 17:28:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:22.235 17:28:18 -- nvmf/common.sh@294 -- # net_devs=() 
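The NVMF_IP_PREFIX=192.168.100 and NVMF_IP_LEAST_ADDR=8 values sourced from nvmf/common.sh above determine every address used later in this run (192.168.100.8 and 192.168.100.9). A minimal sketch of that numbering, assuming allocate_nic_ips simply appends a counter starting at NVMF_IP_LEAST_ADDR to the prefix, one address per RDMA interface; the interface names are the ones this log discovers below:

#!/usr/bin/env bash
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
count=$NVMF_IP_LEAST_ADDR
for nic in mlx_0_0 mlx_0_1; do                      # RDMA net devices found in the trace below
  echo "$nic -> $NVMF_IP_PREFIX.$count/24"          # 192.168.100.8/24, then 192.168.100.9/24
  (( count++ ))
done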
00:25:22.235 17:28:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:22.235 17:28:18 -- nvmf/common.sh@295 -- # e810=() 00:25:22.235 17:28:18 -- nvmf/common.sh@295 -- # local -ga e810 00:25:22.235 17:28:18 -- nvmf/common.sh@296 -- # x722=() 00:25:22.235 17:28:18 -- nvmf/common.sh@296 -- # local -ga x722 00:25:22.235 17:28:18 -- nvmf/common.sh@297 -- # mlx=() 00:25:22.235 17:28:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:22.235 17:28:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.235 17:28:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:22.235 17:28:18 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:22.235 17:28:18 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:22.235 17:28:18 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:22.235 17:28:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:22.235 17:28:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:22.235 17:28:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:22.235 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:22.235 17:28:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:22.235 17:28:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:22.235 17:28:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:22.235 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:22.235 17:28:18 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:22.235 17:28:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:22.235 17:28:18 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:22.235 17:28:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:22.235 17:28:18 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.235 17:28:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:22.235 17:28:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.235 17:28:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:22.235 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:22.235 17:28:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.235 17:28:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:22.235 17:28:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.235 17:28:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:22.235 17:28:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.236 17:28:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:22.236 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.236 17:28:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:22.236 17:28:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:22.236 17:28:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:22.236 17:28:18 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:22.236 17:28:18 -- nvmf/common.sh@57 -- # uname 00:25:22.236 17:28:18 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:22.236 17:28:18 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:22.236 17:28:18 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:22.236 17:28:18 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:22.236 17:28:18 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:22.236 17:28:18 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:22.236 17:28:18 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:22.236 17:28:18 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:22.236 17:28:18 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:22.236 17:28:18 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:22.236 17:28:18 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:22.236 17:28:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:22.236 17:28:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:22.236 17:28:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:22.236 17:28:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:22.236 17:28:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:22.236 17:28:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@104 -- # continue 2 00:25:22.236 17:28:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:22.236 17:28:18 -- 
nvmf/common.sh@104 -- # continue 2 00:25:22.236 17:28:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:22.236 17:28:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:22.236 17:28:18 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:22.236 17:28:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:22.236 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:22.236 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:22.236 altname enp217s0f0np0 00:25:22.236 altname ens818f0np0 00:25:22.236 inet 192.168.100.8/24 scope global mlx_0_0 00:25:22.236 valid_lft forever preferred_lft forever 00:25:22.236 17:28:18 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:22.236 17:28:18 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:22.236 17:28:18 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:22.236 17:28:18 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:22.236 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:22.236 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:22.236 altname enp217s0f1np1 00:25:22.236 altname ens818f1np1 00:25:22.236 inet 192.168.100.9/24 scope global mlx_0_1 00:25:22.236 valid_lft forever preferred_lft forever 00:25:22.236 17:28:18 -- nvmf/common.sh@410 -- # return 0 00:25:22.236 17:28:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:22.236 17:28:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:22.236 17:28:18 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:22.236 17:28:18 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:22.236 17:28:18 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:22.236 17:28:18 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:22.236 17:28:18 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:22.236 17:28:18 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:22.236 17:28:18 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:22.236 17:28:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@104 -- # continue 2 00:25:22.236 17:28:18 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:22.236 17:28:18 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:22.236 17:28:18 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
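The get_ip_address calls above read each RDMA interface's IPv4 address with a short ip/awk/cut pipeline. Restated as a self-contained helper, with the pipeline and interface name copied from the trace (on a host without mlx_0_0 it simply prints nothing):

#!/usr/bin/env bash
get_ip_address() {
  local interface=$1
  # `ip -o` prints one record per line; field 4 is "ADDR/PREFIXLEN", cut drops the prefix length
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on the test node in this log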
00:25:22.236 17:28:18 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@104 -- # continue 2 00:25:22.236 17:28:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:22.236 17:28:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:22.236 17:28:18 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:22.236 17:28:18 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:22.236 17:28:18 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:22.236 17:28:18 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:22.236 192.168.100.9' 00:25:22.236 17:28:18 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:22.236 192.168.100.9' 00:25:22.236 17:28:18 -- nvmf/common.sh@445 -- # head -n 1 00:25:22.236 17:28:18 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:22.236 17:28:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:22.236 192.168.100.9' 00:25:22.236 17:28:18 -- nvmf/common.sh@446 -- # tail -n +2 00:25:22.236 17:28:18 -- nvmf/common.sh@446 -- # head -n 1 00:25:22.236 17:28:18 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:22.236 17:28:18 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:22.236 17:28:18 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:22.236 17:28:18 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:22.236 17:28:18 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:22.236 17:28:18 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:22.236 17:28:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:22.236 17:28:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:22.236 17:28:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.236 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.496 17:28:18 -- nvmf/common.sh@469 -- # nvmfpid=1460233 00:25:22.496 17:28:18 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:22.496 17:28:18 -- nvmf/common.sh@470 -- # waitforlisten 1460233 00:25:22.496 17:28:18 -- common/autotest_common.sh@829 -- # '[' -z 1460233 ']' 00:25:22.496 17:28:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.496 17:28:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.496 17:28:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.496 17:28:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.496 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:25:22.496 [2024-12-14 17:28:18.970174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
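RDMA_IP_LIST above carries one address per line, and the trace selects the first and second target IPs with head and tail. The same selection as a stand-alone snippet, using the two addresses reported in this run:

#!/usr/bin/env bash
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"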
00:25:22.496 [2024-12-14 17:28:18.970222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.496 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.496 [2024-12-14 17:28:19.039187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.496 [2024-12-14 17:28:19.075833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.496 [2024-12-14 17:28:19.075963] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.496 [2024-12-14 17:28:19.075973] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.496 [2024-12-14 17:28:19.075982] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.496 [2024-12-14 17:28:19.076096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.496 [2024-12-14 17:28:19.076194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.496 [2024-12-14 17:28:19.076256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.496 [2024-12-14 17:28:19.076257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.434 17:28:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.434 17:28:19 -- common/autotest_common.sh@862 -- # return 0 00:25:23.434 17:28:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:23.434 17:28:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.434 17:28:19 -- common/autotest_common.sh@10 -- # set +x 00:25:23.434 17:28:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.434 17:28:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:23.434 17:28:19 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:26.725 17:28:22 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:26.725 17:28:22 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:26.725 17:28:23 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:26.725 17:28:23 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:26.725 17:28:23 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:26.725 17:28:23 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:26.725 17:28:23 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:26.725 17:28:23 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:25:26.725 17:28:23 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:25:26.984 [2024-12-14 17:28:23.448341] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:25:26.984 [2024-12-14 17:28:23.468562] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a3b9c0/0x1a49710) succeed. 00:25:26.984 [2024-12-14 17:28:23.477878] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a3cf60/0x1a8adb0) succeed. 
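The target for this perf run is assembled through a short series of rpc.py calls, spread across the trace above and the lines that follow. Gathered here for readability; every command and argument is taken verbatim from this run, and only the back-to-back ordering is a simplification:

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                           # creates Malloc0 (64 MiB, 512 B blocks)
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420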
00:25:26.984 17:28:23 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.243 17:28:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:27.243 17:28:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:27.502 17:28:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:27.502 17:28:23 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:27.502 17:28:24 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:27.761 [2024-12-14 17:28:24.314149] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:27.761 17:28:24 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:28.021 17:28:24 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:28.021 17:28:24 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:28.021 17:28:24 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:28.021 17:28:24 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:29.400 Initializing NVMe Controllers 00:25:29.400 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:29.400 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:29.400 Initialization complete. Launching workers. 00:25:29.400 ======================================================== 00:25:29.400 Latency(us) 00:25:29.400 Device Information : IOPS MiB/s Average min max 00:25:29.400 PCIE (0000:d8:00.0) NSID 1 from core 0: 103770.30 405.35 308.04 29.06 5175.08 00:25:29.400 ======================================================== 00:25:29.400 Total : 103770.30 405.35 308.04 29.06 5175.08 00:25:29.400 00:25:29.400 17:28:25 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:29.400 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.692 Initializing NVMe Controllers 00:25:32.692 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.692 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:32.692 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:32.692 Initialization complete. Launching workers. 
00:25:32.692 ======================================================== 00:25:32.692 Latency(us) 00:25:32.692 Device Information : IOPS MiB/s Average min max 00:25:32.692 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6863.00 26.81 145.51 47.39 4079.01 00:25:32.692 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5324.00 20.80 187.64 64.09 4103.02 00:25:32.692 ======================================================== 00:25:32.692 Total : 12187.00 47.61 163.92 47.39 4103.02 00:25:32.692 00:25:32.692 17:28:29 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:32.692 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.983 Initializing NVMe Controllers 00:25:35.983 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.983 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.983 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:35.983 Initialization complete. Launching workers. 00:25:35.983 ======================================================== 00:25:35.983 Latency(us) 00:25:35.983 Device Information : IOPS MiB/s Average min max 00:25:35.983 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19333.56 75.52 1654.93 464.12 7992.70 00:25:35.983 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3933.88 15.37 8133.93 5907.82 16155.48 00:25:35.983 ======================================================== 00:25:35.983 Total : 23267.44 90.89 2750.35 464.12 16155.48 00:25:35.983 00:25:35.983 17:28:32 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:35.983 17:28:32 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:35.983 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.277 Initializing NVMe Controllers 00:25:40.277 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.277 Controller IO queue size 128, less than required. 00:25:40.277 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.277 Controller IO queue size 128, less than required. 00:25:40.277 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.277 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:40.277 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:40.277 Initialization complete. Launching workers. 
00:25:40.277 ======================================================== 00:25:40.277 Latency(us) 00:25:40.277 Device Information : IOPS MiB/s Average min max 00:25:40.277 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4112.68 1028.17 31237.25 10462.35 69551.15 00:25:40.277 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4161.16 1040.29 30588.37 14413.35 50228.16 00:25:40.277 ======================================================== 00:25:40.277 Total : 8273.84 2068.46 30910.91 10462.35 69551.15 00:25:40.277 00:25:40.277 17:28:36 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:40.277 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.548 No valid NVMe controllers or AIO or URING devices found 00:25:40.808 Initializing NVMe Controllers 00:25:40.808 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.808 Controller IO queue size 128, less than required. 00:25:40.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.808 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:40.808 Controller IO queue size 128, less than required. 00:25:40.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.808 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:40.808 WARNING: Some requested NVMe devices were skipped 00:25:40.808 17:28:37 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:40.808 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.004 Initializing NVMe Controllers 00:25:45.004 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:45.004 Controller IO queue size 128, less than required. 00:25:45.004 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:45.004 Controller IO queue size 128, less than required. 00:25:45.004 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:45.004 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:45.004 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:45.004 Initialization complete. Launching workers. 
00:25:45.004 00:25:45.004 ==================== 00:25:45.004 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:45.004 RDMA transport: 00:25:45.004 dev name: mlx5_0 00:25:45.004 polls: 417916 00:25:45.004 idle_polls: 414082 00:25:45.004 completions: 46513 00:25:45.004 queued_requests: 1 00:25:45.004 total_send_wrs: 23320 00:25:45.004 send_doorbell_updates: 3630 00:25:45.004 total_recv_wrs: 23320 00:25:45.004 recv_doorbell_updates: 3630 00:25:45.004 --------------------------------- 00:25:45.004 00:25:45.004 ==================== 00:25:45.004 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:45.004 RDMA transport: 00:25:45.004 dev name: mlx5_0 00:25:45.004 polls: 416668 00:25:45.004 idle_polls: 416386 00:25:45.004 completions: 20563 00:25:45.004 queued_requests: 1 00:25:45.004 total_send_wrs: 10345 00:25:45.004 send_doorbell_updates: 254 00:25:45.004 total_recv_wrs: 10345 00:25:45.004 recv_doorbell_updates: 254 00:25:45.004 --------------------------------- 00:25:45.004 ======================================================== 00:25:45.004 Latency(us) 00:25:45.004 Device Information : IOPS MiB/s Average min max 00:25:45.004 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5862.00 1465.50 21906.44 10892.45 57799.35 00:25:45.004 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2618.00 654.50 48936.79 29408.12 71976.54 00:25:45.004 ======================================================== 00:25:45.004 Total : 8480.00 2120.00 30251.43 10892.45 71976.54 00:25:45.004 00:25:45.004 17:28:41 -- host/perf.sh@66 -- # sync 00:25:45.004 17:28:41 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.263 17:28:41 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:45.263 17:28:41 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:45.263 17:28:41 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:53.388 17:28:48 -- host/perf.sh@72 -- # ls_guid=034f3258-56c1-43e8-82df-d9e5c51bf10a 00:25:53.388 17:28:48 -- host/perf.sh@73 -- # get_lvs_free_mb 034f3258-56c1-43e8-82df-d9e5c51bf10a 00:25:53.388 17:28:48 -- common/autotest_common.sh@1353 -- # local lvs_uuid=034f3258-56c1-43e8-82df-d9e5c51bf10a 00:25:53.388 17:28:48 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:53.388 17:28:48 -- common/autotest_common.sh@1355 -- # local fc 00:25:53.388 17:28:48 -- common/autotest_common.sh@1356 -- # local cs 00:25:53.388 17:28:48 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:53.388 17:28:48 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:53.388 { 00:25:53.388 "uuid": "034f3258-56c1-43e8-82df-d9e5c51bf10a", 00:25:53.388 "name": "lvs_0", 00:25:53.388 "base_bdev": "Nvme0n1", 00:25:53.388 "total_data_clusters": 476466, 00:25:53.388 "free_clusters": 476466, 00:25:53.388 "block_size": 512, 00:25:53.388 "cluster_size": 4194304 00:25:53.388 } 00:25:53.388 ]' 00:25:53.388 17:28:48 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="034f3258-56c1-43e8-82df-d9e5c51bf10a") .free_clusters' 00:25:53.388 17:28:48 -- common/autotest_common.sh@1358 -- # fc=476466 00:25:53.388 17:28:48 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="034f3258-56c1-43e8-82df-d9e5c51bf10a") .cluster_size' 00:25:53.388 
17:28:48 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:53.388 17:28:48 -- common/autotest_common.sh@1362 -- # free_mb=1905864 00:25:53.388 17:28:48 -- common/autotest_common.sh@1363 -- # echo 1905864 00:25:53.388 1905864 00:25:53.388 17:28:48 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:53.388 17:28:48 -- host/perf.sh@78 -- # free_mb=20480 00:25:53.388 17:28:48 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 034f3258-56c1-43e8-82df-d9e5c51bf10a lbd_0 20480 00:25:53.388 17:28:49 -- host/perf.sh@80 -- # lb_guid=4838a4be-3593-40cd-a26e-d57b6d144fc4 00:25:53.388 17:28:49 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4838a4be-3593-40cd-a26e-d57b6d144fc4 lvs_n_0 00:25:53.647 17:28:50 -- host/perf.sh@83 -- # ls_nested_guid=28f7cb23-4d43-4a9f-926a-b8dbe71bb525 00:25:53.647 17:28:50 -- host/perf.sh@84 -- # get_lvs_free_mb 28f7cb23-4d43-4a9f-926a-b8dbe71bb525 00:25:53.647 17:28:50 -- common/autotest_common.sh@1353 -- # local lvs_uuid=28f7cb23-4d43-4a9f-926a-b8dbe71bb525 00:25:53.647 17:28:50 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:53.647 17:28:50 -- common/autotest_common.sh@1355 -- # local fc 00:25:53.647 17:28:50 -- common/autotest_common.sh@1356 -- # local cs 00:25:53.647 17:28:50 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:53.906 17:28:50 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:53.906 { 00:25:53.906 "uuid": "034f3258-56c1-43e8-82df-d9e5c51bf10a", 00:25:53.906 "name": "lvs_0", 00:25:53.906 "base_bdev": "Nvme0n1", 00:25:53.906 "total_data_clusters": 476466, 00:25:53.906 "free_clusters": 471346, 00:25:53.906 "block_size": 512, 00:25:53.906 "cluster_size": 4194304 00:25:53.906 }, 00:25:53.906 { 00:25:53.906 "uuid": "28f7cb23-4d43-4a9f-926a-b8dbe71bb525", 00:25:53.906 "name": "lvs_n_0", 00:25:53.906 "base_bdev": "4838a4be-3593-40cd-a26e-d57b6d144fc4", 00:25:53.906 "total_data_clusters": 5114, 00:25:53.906 "free_clusters": 5114, 00:25:53.906 "block_size": 512, 00:25:53.906 "cluster_size": 4194304 00:25:53.906 } 00:25:53.906 ]' 00:25:53.906 17:28:50 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="28f7cb23-4d43-4a9f-926a-b8dbe71bb525") .free_clusters' 00:25:53.906 17:28:50 -- common/autotest_common.sh@1358 -- # fc=5114 00:25:53.906 17:28:50 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="28f7cb23-4d43-4a9f-926a-b8dbe71bb525") .cluster_size' 00:25:53.906 17:28:50 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:53.906 17:28:50 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:25:53.906 17:28:50 -- common/autotest_common.sh@1363 -- # echo 20456 00:25:53.906 20456 00:25:53.906 17:28:50 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:53.906 17:28:50 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 28f7cb23-4d43-4a9f-926a-b8dbe71bb525 lbd_nest_0 20456 00:25:54.165 17:28:50 -- host/perf.sh@88 -- # lb_nested_guid=e55638ac-bc23-4803-bf20-37f95da8284c 00:25:54.165 17:28:50 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:54.424 17:28:50 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:54.424 17:28:50 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 e55638ac-bc23-4803-bf20-37f95da8284c 00:25:54.683 17:28:51 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:54.683 17:28:51 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:54.683 17:28:51 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:54.683 17:28:51 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:54.683 17:28:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:54.683 17:28:51 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:54.941 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.153 Initializing NVMe Controllers 00:26:07.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:07.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:07.153 Initialization complete. Launching workers. 00:26:07.153 ======================================================== 00:26:07.153 Latency(us) 00:26:07.153 Device Information : IOPS MiB/s Average min max 00:26:07.153 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5926.98 2.89 168.31 67.64 8064.20 00:26:07.153 ======================================================== 00:26:07.154 Total : 5926.98 2.89 168.31 67.64 8064.20 00:26:07.154 00:26:07.154 17:29:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:07.154 17:29:02 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:07.154 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.368 Initializing NVMe Controllers 00:26:19.368 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:19.368 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:19.368 Initialization complete. Launching workers. 00:26:19.368 ======================================================== 00:26:19.368 Latency(us) 00:26:19.368 Device Information : IOPS MiB/s Average min max 00:26:19.368 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2667.00 333.37 374.44 153.80 7022.28 00:26:19.368 ======================================================== 00:26:19.368 Total : 2667.00 333.37 374.44 153.80 7022.28 00:26:19.368 00:26:19.368 17:29:14 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:19.368 17:29:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:19.368 17:29:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:19.368 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.351 Initializing NVMe Controllers 00:26:29.351 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:29.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:29.351 Initialization complete. Launching workers. 
00:26:29.351 ======================================================== 00:26:29.351 Latency(us) 00:26:29.351 Device Information : IOPS MiB/s Average min max 00:26:29.351 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12318.30 6.01 2597.63 884.51 7434.35 00:26:29.351 ======================================================== 00:26:29.351 Total : 12318.30 6.01 2597.63 884.51 7434.35 00:26:29.351 00:26:29.351 17:29:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:29.351 17:29:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:29.351 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.567 Initializing NVMe Controllers 00:26:41.567 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.567 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:41.567 Initialization complete. Launching workers. 00:26:41.567 ======================================================== 00:26:41.567 Latency(us) 00:26:41.567 Device Information : IOPS MiB/s Average min max 00:26:41.567 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3965.90 495.74 8072.94 3932.72 16021.18 00:26:41.567 ======================================================== 00:26:41.567 Total : 3965.90 495.74 8072.94 3932.72 16021.18 00:26:41.567 00:26:41.567 17:29:36 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:41.567 17:29:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:41.567 17:29:36 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:41.567 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.552 Initializing NVMe Controllers 00:26:51.552 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.552 Controller IO queue size 128, less than required. 00:26:51.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:51.552 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:51.552 Initialization complete. Launching workers. 00:26:51.552 ======================================================== 00:26:51.552 Latency(us) 00:26:51.552 Device Information : IOPS MiB/s Average min max 00:26:51.552 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19586.10 9.56 6537.80 1847.98 16703.84 00:26:51.552 ======================================================== 00:26:51.552 Total : 19586.10 9.56 6537.80 1847.98 16703.84 00:26:51.552 00:26:51.552 17:29:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:51.552 17:29:48 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:51.552 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.770 Initializing NVMe Controllers 00:27:03.770 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.770 Controller IO queue size 128, less than required. 00:27:03.770 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:03.770 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:03.770 Initialization complete. Launching workers. 00:27:03.770 ======================================================== 00:27:03.770 Latency(us) 00:27:03.770 Device Information : IOPS MiB/s Average min max 00:27:03.770 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11313.25 1414.16 11316.06 3193.83 23531.60 00:27:03.770 ======================================================== 00:27:03.770 Total : 11313.25 1414.16 11316.06 3193.83 23531.60 00:27:03.770 00:27:03.770 17:29:59 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.770 17:29:59 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e55638ac-bc23-4803-bf20-37f95da8284c 00:27:03.770 17:30:00 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:04.030 17:30:00 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4838a4be-3593-40cd-a26e-d57b6d144fc4 00:27:04.290 17:30:00 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:04.290 17:30:00 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:04.290 17:30:00 -- host/perf.sh@114 -- # nvmftestfini 00:27:04.290 17:30:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:04.290 17:30:00 -- nvmf/common.sh@116 -- # sync 00:27:04.291 17:30:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:04.291 17:30:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:04.291 17:30:00 -- nvmf/common.sh@119 -- # set +e 00:27:04.291 17:30:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:04.291 17:30:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:04.291 rmmod nvme_rdma 00:27:04.291 rmmod nvme_fabrics 00:27:04.291 17:30:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:04.291 17:30:00 -- nvmf/common.sh@123 -- # set -e 00:27:04.291 17:30:00 -- nvmf/common.sh@124 -- # return 0 00:27:04.291 17:30:00 -- nvmf/common.sh@477 -- # '[' -n 1460233 ']' 00:27:04.291 17:30:00 -- nvmf/common.sh@478 -- # killprocess 1460233 00:27:04.291 17:30:00 -- common/autotest_common.sh@936 -- # '[' -z 1460233 ']' 00:27:04.291 17:30:00 -- common/autotest_common.sh@940 -- # kill -0 1460233 00:27:04.291 17:30:00 -- common/autotest_common.sh@941 -- # uname 00:27:04.291 17:30:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:04.291 17:30:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1460233 00:27:04.550 17:30:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:04.551 17:30:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:04.551 17:30:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1460233' 00:27:04.551 killing process with pid 1460233 00:27:04.551 17:30:01 -- common/autotest_common.sh@955 -- # kill 1460233 00:27:04.551 17:30:01 -- common/autotest_common.sh@960 -- # wait 1460233 00:27:07.090 17:30:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:07.090 17:30:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:07.090 00:27:07.090 real 1m51.688s 00:27:07.090 user 7m1.369s 00:27:07.090 sys 0m7.322s 00:27:07.090 17:30:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:07.090 17:30:03 -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.090 ************************************ 00:27:07.090 END TEST nvmf_perf 00:27:07.090 ************************************ 00:27:07.090 17:30:03 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:07.090 17:30:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:07.090 17:30:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:07.090 17:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:07.090 ************************************ 00:27:07.090 START TEST nvmf_fio_host 00:27:07.090 ************************************ 00:27:07.090 17:30:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:07.090 * Looking for test storage... 00:27:07.090 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:07.090 17:30:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:07.090 17:30:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:07.090 17:30:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:07.350 17:30:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:07.350 17:30:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:07.350 17:30:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:07.350 17:30:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:07.350 17:30:03 -- scripts/common.sh@335 -- # IFS=.-: 00:27:07.350 17:30:03 -- scripts/common.sh@335 -- # read -ra ver1 00:27:07.350 17:30:03 -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.350 17:30:03 -- scripts/common.sh@336 -- # read -ra ver2 00:27:07.350 17:30:03 -- scripts/common.sh@337 -- # local 'op=<' 00:27:07.350 17:30:03 -- scripts/common.sh@339 -- # ver1_l=2 00:27:07.350 17:30:03 -- scripts/common.sh@340 -- # ver2_l=1 00:27:07.350 17:30:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:07.350 17:30:03 -- scripts/common.sh@343 -- # case "$op" in 00:27:07.350 17:30:03 -- scripts/common.sh@344 -- # : 1 00:27:07.350 17:30:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:07.350 17:30:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:07.350 17:30:03 -- scripts/common.sh@364 -- # decimal 1 00:27:07.350 17:30:03 -- scripts/common.sh@352 -- # local d=1 00:27:07.350 17:30:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.350 17:30:03 -- scripts/common.sh@354 -- # echo 1 00:27:07.350 17:30:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:07.350 17:30:03 -- scripts/common.sh@365 -- # decimal 2 00:27:07.350 17:30:03 -- scripts/common.sh@352 -- # local d=2 00:27:07.350 17:30:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.350 17:30:03 -- scripts/common.sh@354 -- # echo 2 00:27:07.350 17:30:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:07.351 17:30:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:07.351 17:30:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:07.351 17:30:03 -- scripts/common.sh@367 -- # return 0 00:27:07.351 17:30:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.351 17:30:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:07.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.351 --rc genhtml_branch_coverage=1 00:27:07.351 --rc genhtml_function_coverage=1 00:27:07.351 --rc genhtml_legend=1 00:27:07.351 --rc geninfo_all_blocks=1 00:27:07.351 --rc geninfo_unexecuted_blocks=1 00:27:07.351 00:27:07.351 ' 00:27:07.351 17:30:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:07.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.351 --rc genhtml_branch_coverage=1 00:27:07.351 --rc genhtml_function_coverage=1 00:27:07.351 --rc genhtml_legend=1 00:27:07.351 --rc geninfo_all_blocks=1 00:27:07.351 --rc geninfo_unexecuted_blocks=1 00:27:07.351 00:27:07.351 ' 00:27:07.351 17:30:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:07.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.351 --rc genhtml_branch_coverage=1 00:27:07.351 --rc genhtml_function_coverage=1 00:27:07.351 --rc genhtml_legend=1 00:27:07.351 --rc geninfo_all_blocks=1 00:27:07.351 --rc geninfo_unexecuted_blocks=1 00:27:07.351 00:27:07.351 ' 00:27:07.351 17:30:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:07.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.351 --rc genhtml_branch_coverage=1 00:27:07.351 --rc genhtml_function_coverage=1 00:27:07.351 --rc genhtml_legend=1 00:27:07.351 --rc geninfo_all_blocks=1 00:27:07.351 --rc geninfo_unexecuted_blocks=1 00:27:07.351 00:27:07.351 ' 00:27:07.351 17:30:03 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:07.351 17:30:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.351 17:30:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.351 17:30:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.351 17:30:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- paths/export.sh@5 -- # export PATH 00:27:07.351 17:30:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.351 17:30:03 -- nvmf/common.sh@7 -- # uname -s 00:27:07.351 17:30:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.351 17:30:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.351 17:30:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.351 17:30:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.351 17:30:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.351 17:30:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.351 17:30:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.351 17:30:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.351 17:30:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.351 17:30:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.351 17:30:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:07.351 17:30:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:07.351 17:30:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.351 17:30:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.351 17:30:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.351 17:30:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:07.351 17:30:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.351 17:30:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.351 17:30:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.351 17:30:03 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- paths/export.sh@5 -- # export PATH 00:27:07.351 17:30:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.351 17:30:03 -- nvmf/common.sh@46 -- # : 0 00:27:07.351 17:30:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:07.351 17:30:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:07.351 17:30:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:07.351 17:30:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.351 17:30:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.351 17:30:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:07.351 17:30:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:07.351 17:30:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:07.351 17:30:03 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:07.351 17:30:03 -- host/fio.sh@14 -- # nvmftestinit 00:27:07.351 17:30:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:07.351 17:30:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.351 17:30:03 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:27:07.351 17:30:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:07.351 17:30:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:07.351 17:30:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.351 17:30:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.351 17:30:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.351 17:30:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:07.351 17:30:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:07.351 17:30:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:07.351 17:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:13.928 17:30:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:13.928 17:30:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:13.928 17:30:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:13.928 17:30:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:13.928 17:30:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:13.928 17:30:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:13.928 17:30:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:13.928 17:30:10 -- nvmf/common.sh@294 -- # net_devs=() 00:27:13.928 17:30:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:13.928 17:30:10 -- nvmf/common.sh@295 -- # e810=() 00:27:13.928 17:30:10 -- nvmf/common.sh@295 -- # local -ga e810 00:27:13.928 17:30:10 -- nvmf/common.sh@296 -- # x722=() 00:27:13.928 17:30:10 -- nvmf/common.sh@296 -- # local -ga x722 00:27:13.928 17:30:10 -- nvmf/common.sh@297 -- # mlx=() 00:27:13.928 17:30:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:13.928 17:30:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.929 17:30:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:13.929 17:30:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:13.929 17:30:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:13.929 17:30:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:13.929 17:30:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:13.929 17:30:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:13.929 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:13.929 17:30:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:13.929 17:30:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:13.929 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:13.929 17:30:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:13.929 17:30:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:13.929 17:30:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.929 17:30:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:13.929 17:30:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.929 17:30:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:13.929 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.929 17:30:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.929 17:30:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:13.929 17:30:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.929 17:30:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:13.929 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.929 17:30:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:13.929 17:30:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:13.929 17:30:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:13.929 17:30:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:13.929 17:30:10 -- nvmf/common.sh@57 -- # uname 00:27:13.929 17:30:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:13.929 17:30:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:13.929 17:30:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:13.929 17:30:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:13.929 17:30:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:13.929 17:30:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:13.929 17:30:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:13.929 17:30:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:13.929 17:30:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:13.929 17:30:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:13.929 17:30:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:13.929 17:30:10 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:13.929 17:30:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:13.929 17:30:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:13.929 17:30:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:13.929 17:30:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:13.929 17:30:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@104 -- # continue 2 00:27:13.929 17:30:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@104 -- # continue 2 00:27:13.929 17:30:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:13.929 17:30:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.929 17:30:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:13.929 17:30:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:13.929 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:13.929 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:13.929 altname enp217s0f0np0 00:27:13.929 altname ens818f0np0 00:27:13.929 inet 192.168.100.8/24 scope global mlx_0_0 00:27:13.929 valid_lft forever preferred_lft forever 00:27:13.929 17:30:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:13.929 17:30:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.929 17:30:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:13.929 17:30:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:13.929 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:13.929 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:13.929 altname enp217s0f1np1 00:27:13.929 altname ens818f1np1 00:27:13.929 inet 192.168.100.9/24 scope global mlx_0_1 00:27:13.929 valid_lft forever preferred_lft forever 00:27:13.929 17:30:10 -- nvmf/common.sh@410 -- # return 0 00:27:13.929 17:30:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:13.929 17:30:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:13.929 17:30:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
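The per-interface address lookup echoed above is a small helper in test/nvmf/common.sh; a minimal sketch of the extraction step, assuming one IPv4 address per RDMA interface as in the "ip addr show" output logged here (the in-tree helper may differ in detail):

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX", so drop the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this host
get_ip_address mlx_0_1   # 192.168.100.9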
00:27:13.929 17:30:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:13.929 17:30:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:13.929 17:30:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:13.929 17:30:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:13.929 17:30:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:13.929 17:30:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:13.929 17:30:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@104 -- # continue 2 00:27:13.929 17:30:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.929 17:30:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:13.929 17:30:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@104 -- # continue 2 00:27:13.929 17:30:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:13.929 17:30:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.929 17:30:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:13.929 17:30:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.929 17:30:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.929 17:30:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:13.929 192.168.100.9' 00:27:13.929 17:30:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:13.929 192.168.100.9' 00:27:13.929 17:30:10 -- nvmf/common.sh@445 -- # head -n 1 00:27:13.929 17:30:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:13.929 17:30:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:13.929 192.168.100.9' 00:27:13.929 17:30:10 -- nvmf/common.sh@446 -- # tail -n +2 00:27:13.929 17:30:10 -- nvmf/common.sh@446 -- # head -n 1 00:27:13.929 17:30:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:13.930 17:30:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:13.930 17:30:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:13.930 17:30:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:13.930 17:30:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:13.930 17:30:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:13.930 17:30:10 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:13.930 17:30:10 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:13.930 17:30:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:13.930 17:30:10 -- common/autotest_common.sh@10 -- # set +x 
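For reference, the two addresses gathered above are split into first/second target IPs with head/tail, and the transport options and kernel module are set exactly as echoed; a short sketch assuming exactly two RDMA ports:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma   # kernel NVMe/RDMA module, loaded because the transport under test is rdma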
00:27:13.930 17:30:10 -- host/fio.sh@24 -- # nvmfpid=1481550 00:27:13.930 17:30:10 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:13.930 17:30:10 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:13.930 17:30:10 -- host/fio.sh@28 -- # waitforlisten 1481550 00:27:13.930 17:30:10 -- common/autotest_common.sh@829 -- # '[' -z 1481550 ']' 00:27:13.930 17:30:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.930 17:30:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.930 17:30:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.930 17:30:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.930 17:30:10 -- common/autotest_common.sh@10 -- # set +x 00:27:13.930 [2024-12-14 17:30:10.427706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:13.930 [2024-12-14 17:30:10.427758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.930 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.930 [2024-12-14 17:30:10.499013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.930 [2024-12-14 17:30:10.537341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:13.930 [2024-12-14 17:30:10.537469] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.930 [2024-12-14 17:30:10.537479] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.930 [2024-12-14 17:30:10.537488] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.930 [2024-12-14 17:30:10.537535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.930 [2024-12-14 17:30:10.537650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.930 [2024-12-14 17:30:10.537733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.930 [2024-12-14 17:30:10.537735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.868 17:30:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.868 17:30:11 -- common/autotest_common.sh@862 -- # return 0 00:27:14.868 17:30:11 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:14.868 [2024-12-14 17:30:11.444003] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c2e0d0/0x1c325a0) succeed. 00:27:14.868 [2024-12-14 17:30:11.453334] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c2f670/0x1c73c40) succeed. 
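Everything on the target side is driven through scripts/rpc.py against the nvmf_tgt started above; a condensed sketch of the bring-up sequence traced here and in the lines that follow, with the full workspace path shortened to "rpc.py":

# target app: build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (waitforlisten blocks until the RPC socket is up)
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1                                        # 64 MB ramdisk, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420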
00:27:15.127 17:30:11 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:15.127 17:30:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:15.127 17:30:11 -- common/autotest_common.sh@10 -- # set +x 00:27:15.127 17:30:11 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:15.387 Malloc1 00:27:15.387 17:30:11 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.387 17:30:12 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:15.646 17:30:12 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:15.906 [2024-12-14 17:30:12.362288] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:15.906 17:30:12 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:15.906 17:30:12 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:15.906 17:30:12 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:15.906 17:30:12 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:15.906 17:30:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:15.906 17:30:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.906 17:30:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:15.906 17:30:12 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.906 17:30:12 -- common/autotest_common.sh@1330 -- # shift 00:27:15.906 17:30:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:15.906 17:30:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.906 17:30:12 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.906 17:30:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:15.906 17:30:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:16.197 17:30:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:16.197 17:30:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:16.197 17:30:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.197 17:30:12 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.197 17:30:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:16.197 17:30:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:16.197 17:30:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:16.197 17:30:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:16.197 17:30:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:16.197 17:30:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:16.461 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:16.461 fio-3.35 00:27:16.461 Starting 1 thread 00:27:16.461 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.998 00:27:18.998 test: (groupid=0, jobs=1): err= 0: pid=1482242: Sat Dec 14 17:30:15 2024 00:27:18.998 read: IOPS=19.0k, BW=74.1MiB/s (77.7MB/s)(148MiB/2003msec) 00:27:18.998 slat (nsec): min=1331, max=28065, avg=1475.14, stdev=457.48 00:27:18.998 clat (usec): min=1834, max=6129, avg=3351.32, stdev=75.74 00:27:18.998 lat (usec): min=1856, max=6130, avg=3352.79, stdev=75.69 00:27:18.998 clat percentiles (usec): 00:27:18.998 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3326], 00:27:18.998 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:27:18.998 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3359], 00:27:18.998 | 99.00th=[ 3392], 99.50th=[ 3523], 99.90th=[ 4424], 99.95th=[ 5276], 00:27:18.998 | 99.99th=[ 6128] 00:27:18.998 bw ( KiB/s): min=74312, max=76464, per=99.97%, avg=75854.00, stdev=1030.57, samples=4 00:27:18.998 iops : min=18578, max=19116, avg=18963.50, stdev=257.64, samples=4 00:27:18.998 write: IOPS=19.0k, BW=74.1MiB/s (77.7MB/s)(148MiB/2003msec); 0 zone resets 00:27:18.998 slat (nsec): min=1370, max=17327, avg=1558.03, stdev=476.22 00:27:18.998 clat (usec): min=2527, max=6101, avg=3349.75, stdev=72.42 00:27:18.998 lat (usec): min=2533, max=6102, avg=3351.31, stdev=72.36 00:27:18.998 clat percentiles (usec): 00:27:18.998 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3326], 20.00th=[ 3326], 00:27:18.998 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:27:18.998 | 70.00th=[ 3359], 80.00th=[ 3359], 90.00th=[ 3359], 95.00th=[ 3359], 00:27:18.998 | 99.00th=[ 3392], 99.50th=[ 3523], 99.90th=[ 4047], 99.95th=[ 5276], 00:27:18.998 | 99.99th=[ 5735] 00:27:18.998 bw ( KiB/s): min=74376, max=76472, per=99.99%, avg=75882.00, stdev=1005.97, samples=4 00:27:18.998 iops : min=18594, max=19118, avg=18970.50, stdev=251.49, samples=4 00:27:18.998 lat (msec) : 2=0.01%, 4=99.89%, 10=0.11% 00:27:18.998 cpu : usr=99.45%, sys=0.15%, ctx=15, majf=0, minf=2 00:27:18.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:18.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.998 issued rwts: total=37995,38000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.998 00:27:18.998 Run status group 0 (all jobs): 00:27:18.998 READ: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=148MiB (156MB), run=2003-2003msec 00:27:18.998 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=148MiB (156MB), run=2003-2003msec 00:27:18.998 17:30:15 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:18.998 17:30:15 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:18.998 17:30:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:18.998 17:30:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.998 17:30:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:18.998 17:30:15 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.999 17:30:15 -- common/autotest_common.sh@1330 -- # shift 00:27:18.999 17:30:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:18.999 17:30:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.999 17:30:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.999 17:30:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.999 17:30:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.999 17:30:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.999 17:30:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:18.999 17:30:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:18.999 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:18.999 fio-3.35 00:27:18.999 Starting 1 thread 00:27:18.999 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.538 00:27:21.538 test: (groupid=0, jobs=1): err= 0: pid=1482755: Sat Dec 14 17:30:17 2024 00:27:21.538 read: IOPS=15.0k, BW=235MiB/s (246MB/s)(461MiB/1963msec) 00:27:21.538 slat (usec): min=2, max=109, avg= 2.63, stdev= 1.38 00:27:21.538 clat (usec): min=457, max=9986, avg=1596.46, stdev=1304.71 00:27:21.538 lat (usec): min=460, max=9993, avg=1599.08, stdev=1305.28 00:27:21.538 clat percentiles (usec): 00:27:21.538 | 1.00th=[ 652], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 881], 00:27:21.538 | 30.00th=[ 955], 40.00th=[ 1037], 50.00th=[ 1139], 60.00th=[ 1254], 00:27:21.538 | 70.00th=[ 1401], 80.00th=[ 1614], 90.00th=[ 3949], 95.00th=[ 4752], 00:27:21.538 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8586], 99.95th=[ 9241], 00:27:21.538 | 99.99th=[ 9896] 00:27:21.538 bw ( KiB/s): min=105408, max=121664, per=48.28%, avg=116036.75, stdev=7239.61, samples=4 00:27:21.538 iops : min= 6588, max= 7604, avg=7252.25, stdev=452.46, samples=4 00:27:21.538 write: IOPS=8503, BW=133MiB/s (139MB/s)(236MiB/1774msec); 0 zone resets 00:27:21.538 slat (usec): min=26, max=130, avg=28.88, stdev= 5.62 
00:27:21.538 clat (usec): min=4022, max=19077, avg=12025.65, stdev=1847.82 00:27:21.538 lat (usec): min=4049, max=19105, avg=12054.53, stdev=1847.63 00:27:21.538 clat percentiles (usec): 00:27:21.538 | 1.00th=[ 6390], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10683], 00:27:21.538 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:27:21.538 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14353], 95.00th=[15008], 00:27:21.538 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18220], 99.95th=[18744], 00:27:21.538 | 99.99th=[18744] 00:27:21.538 bw ( KiB/s): min=109344, max=125728, per=88.07%, avg=119819.75, stdev=7313.51, samples=4 00:27:21.538 iops : min= 6834, max= 7858, avg=7488.50, stdev=457.07, samples=4 00:27:21.538 lat (usec) : 500=0.01%, 750=3.72%, 1000=19.95% 00:27:21.538 lat (msec) : 2=33.09%, 4=2.79%, 10=10.60%, 20=29.84% 00:27:21.538 cpu : usr=95.91%, sys=2.05%, ctx=226, majf=0, minf=1 00:27:21.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:21.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:21.538 issued rwts: total=29489,15085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.538 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:21.538 00:27:21.538 Run status group 0 (all jobs): 00:27:21.538 READ: bw=235MiB/s (246MB/s), 235MiB/s-235MiB/s (246MB/s-246MB/s), io=461MiB (483MB), run=1963-1963msec 00:27:21.538 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=236MiB (247MB), run=1774-1774msec 00:27:21.538 17:30:18 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.538 17:30:18 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:21.538 17:30:18 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:21.538 17:30:18 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:21.538 17:30:18 -- common/autotest_common.sh@1508 -- # bdfs=() 00:27:21.538 17:30:18 -- common/autotest_common.sh@1508 -- # local bdfs 00:27:21.538 17:30:18 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:21.538 17:30:18 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:21.538 17:30:18 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:27:21.798 17:30:18 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:27:21.798 17:30:18 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:27:21.798 17:30:18 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:27:25.133 Nvme0n1 00:27:25.133 17:30:21 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:30.436 17:30:26 -- host/fio.sh@53 -- # ls_guid=876b81a4-dd35-4d50-8831-64271b700a70 00:27:30.436 17:30:26 -- host/fio.sh@54 -- # get_lvs_free_mb 876b81a4-dd35-4d50-8831-64271b700a70 00:27:30.436 17:30:26 -- common/autotest_common.sh@1353 -- # local lvs_uuid=876b81a4-dd35-4d50-8831-64271b700a70 00:27:30.436 17:30:26 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:30.436 17:30:26 -- common/autotest_common.sh@1355 -- # local fc 00:27:30.436 17:30:26 -- common/autotest_common.sh@1356 -- # local cs 00:27:30.436 17:30:26 -- 
common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:30.436 17:30:27 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:30.436 { 00:27:30.436 "uuid": "876b81a4-dd35-4d50-8831-64271b700a70", 00:27:30.436 "name": "lvs_0", 00:27:30.436 "base_bdev": "Nvme0n1", 00:27:30.436 "total_data_clusters": 1862, 00:27:30.436 "free_clusters": 1862, 00:27:30.436 "block_size": 512, 00:27:30.436 "cluster_size": 1073741824 00:27:30.436 } 00:27:30.436 ]' 00:27:30.436 17:30:27 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="876b81a4-dd35-4d50-8831-64271b700a70") .free_clusters' 00:27:30.436 17:30:27 -- common/autotest_common.sh@1358 -- # fc=1862 00:27:30.436 17:30:27 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="876b81a4-dd35-4d50-8831-64271b700a70") .cluster_size' 00:27:30.436 17:30:27 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:27:30.436 17:30:27 -- common/autotest_common.sh@1362 -- # free_mb=1906688 00:27:30.436 17:30:27 -- common/autotest_common.sh@1363 -- # echo 1906688 00:27:30.436 1906688 00:27:30.436 17:30:27 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:27:31.005 af6d1f64-9a6a-4180-ac0e-f8a7f223b542 00:27:31.005 17:30:27 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:31.264 17:30:27 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:31.523 17:30:27 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:31.523 17:30:28 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:31.523 17:30:28 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:31.523 17:30:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:31.523 17:30:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.523 17:30:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:31.523 17:30:28 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.523 17:30:28 -- common/autotest_common.sh@1330 -- # shift 00:27:31.523 17:30:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:31.523 17:30:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:31.523 17:30:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:31.523 17:30:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
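The 1906688 passed to bdev_lvol_create above comes straight from the lvstore numbers printed by bdev_lvol_get_lvstores: 1862 free clusters at a 1073741824-byte (1 GiB) cluster size is 1862 * 1024 MiB = 1906688 MiB. A sketch of that conversion, selecting by lvstore name rather than UUID for readability:

# sketch: free MiB = free_clusters * cluster_size / 1 MiB, using the lvs_0 values reported above
fc=$(rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .free_clusters')   # 1862
cs=$(rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .cluster_size')    # 1073741824
echo $(( fc * cs / 1048576 ))   # 1906688, the size handed to "bdev_lvol_create -l lvs_0 lbd_0"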
00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:31.523 17:30:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:31.523 17:30:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:31.523 17:30:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:31.523 17:30:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:32.089 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:32.089 fio-3.35 00:27:32.089 Starting 1 thread 00:27:32.089 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.626 00:27:34.626 test: (groupid=0, jobs=1): err= 0: pid=1485066: Sat Dec 14 17:30:30 2024 00:27:34.626 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(80.6MiB/2005msec) 00:27:34.626 slat (nsec): min=1333, max=17504, avg=1449.31, stdev=256.66 00:27:34.626 clat (usec): min=155, max=349594, avg=6173.21, stdev=19276.41 00:27:34.626 lat (usec): min=156, max=349597, avg=6174.66, stdev=19276.44 00:27:34.626 clat percentiles (msec): 00:27:34.626 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:34.626 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:34.626 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:34.626 | 99.00th=[ 6], 99.50th=[ 6], 99.90th=[ 351], 99.95th=[ 351], 00:27:34.626 | 99.99th=[ 351] 00:27:34.626 bw ( KiB/s): min=13992, max=50416, per=99.98%, avg=41158.00, stdev=18111.62, samples=4 00:27:34.626 iops : min= 3498, max=12604, avg=10289.50, stdev=4527.90, samples=4 00:27:34.626 write: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(80.7MiB/2005msec); 0 zone resets 00:27:34.626 slat (nsec): min=1371, max=17519, avg=1569.74, stdev=332.29 00:27:34.626 clat (usec): min=151, max=349866, avg=6138.15, stdev=18730.96 00:27:34.626 lat (usec): min=152, max=349869, avg=6139.72, stdev=18731.02 00:27:34.626 clat percentiles (msec): 00:27:34.626 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:34.626 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:34.626 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:34.626 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 351], 99.95th=[ 351], 00:27:34.626 | 99.99th=[ 351] 00:27:34.626 bw ( KiB/s): min=14536, max=50256, per=99.90%, avg=41176.00, stdev=17760.93, samples=4 00:27:34.626 iops : min= 3634, max=12564, avg=10294.00, stdev=4440.23, samples=4 00:27:34.626 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:27:34.626 lat (msec) : 2=0.04%, 4=0.29%, 10=99.31%, 500=0.31% 00:27:34.626 cpu : usr=99.50%, sys=0.05%, ctx=14, majf=0, minf=2 00:27:34.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:34.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:34.626 issued rwts: total=20634,20660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:34.626 00:27:34.626 Run status group 0 (all jobs): 00:27:34.626 READ: bw=40.2MiB/s (42.2MB/s), 
40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=80.6MiB (84.5MB), run=2005-2005msec 00:27:34.626 WRITE: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=80.7MiB (84.6MB), run=2005-2005msec 00:27:34.626 17:30:30 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:34.626 17:30:31 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:36.004 17:30:32 -- host/fio.sh@64 -- # ls_nested_guid=ca19b18d-0562-4462-a78b-cd2dbe31253c 00:27:36.004 17:30:32 -- host/fio.sh@65 -- # get_lvs_free_mb ca19b18d-0562-4462-a78b-cd2dbe31253c 00:27:36.004 17:30:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=ca19b18d-0562-4462-a78b-cd2dbe31253c 00:27:36.004 17:30:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:36.004 17:30:32 -- common/autotest_common.sh@1355 -- # local fc 00:27:36.004 17:30:32 -- common/autotest_common.sh@1356 -- # local cs 00:27:36.004 17:30:32 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:36.004 17:30:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:36.004 { 00:27:36.004 "uuid": "876b81a4-dd35-4d50-8831-64271b700a70", 00:27:36.004 "name": "lvs_0", 00:27:36.004 "base_bdev": "Nvme0n1", 00:27:36.004 "total_data_clusters": 1862, 00:27:36.004 "free_clusters": 0, 00:27:36.004 "block_size": 512, 00:27:36.004 "cluster_size": 1073741824 00:27:36.004 }, 00:27:36.004 { 00:27:36.004 "uuid": "ca19b18d-0562-4462-a78b-cd2dbe31253c", 00:27:36.004 "name": "lvs_n_0", 00:27:36.004 "base_bdev": "af6d1f64-9a6a-4180-ac0e-f8a7f223b542", 00:27:36.004 "total_data_clusters": 476206, 00:27:36.004 "free_clusters": 476206, 00:27:36.004 "block_size": 512, 00:27:36.004 "cluster_size": 4194304 00:27:36.004 } 00:27:36.004 ]' 00:27:36.004 17:30:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ca19b18d-0562-4462-a78b-cd2dbe31253c") .free_clusters' 00:27:36.004 17:30:32 -- common/autotest_common.sh@1358 -- # fc=476206 00:27:36.004 17:30:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ca19b18d-0562-4462-a78b-cd2dbe31253c") .cluster_size' 00:27:36.004 17:30:32 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:36.004 17:30:32 -- common/autotest_common.sh@1362 -- # free_mb=1904824 00:27:36.004 17:30:32 -- common/autotest_common.sh@1363 -- # echo 1904824 00:27:36.004 1904824 00:27:36.004 17:30:32 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:27:36.942 cec8bf32-dc2a-4b7c-82da-b02ac571247b 00:27:36.942 17:30:33 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:36.942 17:30:33 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:37.201 17:30:33 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:37.460 17:30:33 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:37.460 17:30:33 -- 
common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:37.460 17:30:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:37.460 17:30:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.460 17:30:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:37.460 17:30:33 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.460 17:30:33 -- common/autotest_common.sh@1330 -- # shift 00:27:37.460 17:30:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:37.460 17:30:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.460 17:30:33 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.460 17:30:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:37.460 17:30:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:37.460 17:30:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:37.460 17:30:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:37.460 17:30:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.460 17:30:34 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.460 17:30:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:37.460 17:30:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:37.460 17:30:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:37.460 17:30:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:37.460 17:30:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:37.460 17:30:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:37.718 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:37.718 fio-3.35 00:27:37.718 Starting 1 thread 00:27:37.718 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.252 00:27:40.252 test: (groupid=0, jobs=1): err= 0: pid=1486191: Sat Dec 14 17:30:36 2024 00:27:40.252 read: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(83.9MiB/2005msec) 00:27:40.252 slat (nsec): min=1349, max=15492, avg=1477.67, stdev=213.41 00:27:40.252 clat (usec): min=2588, max=10605, avg=5912.39, stdev=165.16 00:27:40.252 lat (usec): min=2591, max=10606, avg=5913.87, stdev=165.13 00:27:40.252 clat percentiles (usec): 00:27:40.252 | 1.00th=[ 5800], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:27:40.252 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:40.252 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5932], 95.00th=[ 5932], 00:27:40.252 | 99.00th=[ 5997], 99.50th=[ 5997], 99.90th=[ 8094], 99.95th=[ 9503], 00:27:40.252 | 99.99th=[10552] 00:27:40.252 bw ( KiB/s): min=41304, max=43520, per=99.94%, avg=42818.00, stdev=1029.95, samples=4 00:27:40.252 iops : min=10326, max=10880, avg=10704.50, stdev=257.49, samples=4 00:27:40.252 write: IOPS=10.7k, BW=41.8MiB/s (43.8MB/s)(83.7MiB/2005msec); 0 zone 
resets 00:27:40.252 slat (nsec): min=1382, max=17447, avg=1590.84, stdev=218.66 00:27:40.252 clat (usec): min=2584, max=10617, avg=5933.45, stdev=161.98 00:27:40.252 lat (usec): min=2588, max=10619, avg=5935.04, stdev=161.96 00:27:40.252 clat percentiles (usec): 00:27:40.252 | 1.00th=[ 5866], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5932], 00:27:40.252 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:40.252 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 5997], 00:27:40.252 | 99.00th=[ 5997], 99.50th=[ 6063], 99.90th=[ 8094], 99.95th=[ 9503], 00:27:40.252 | 99.99th=[10552] 00:27:40.252 bw ( KiB/s): min=41696, max=43168, per=99.96%, avg=42750.00, stdev=705.95, samples=4 00:27:40.252 iops : min=10424, max=10792, avg=10687.50, stdev=176.49, samples=4 00:27:40.252 lat (msec) : 4=0.04%, 10=99.94%, 20=0.02% 00:27:40.252 cpu : usr=99.55%, sys=0.10%, ctx=15, majf=0, minf=2 00:27:40.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:40.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:40.252 issued rwts: total=21475,21436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:40.252 00:27:40.252 Run status group 0 (all jobs): 00:27:40.252 READ: bw=41.8MiB/s (43.9MB/s), 41.8MiB/s-41.8MiB/s (43.9MB/s-43.9MB/s), io=83.9MiB (88.0MB), run=2005-2005msec 00:27:40.252 WRITE: bw=41.8MiB/s (43.8MB/s), 41.8MiB/s-41.8MiB/s (43.8MB/s-43.8MB/s), io=83.7MiB (87.8MB), run=2005-2005msec 00:27:40.252 17:30:36 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:40.252 17:30:36 -- host/fio.sh@74 -- # sync 00:27:40.252 17:30:36 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:48.370 17:30:44 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:48.370 17:30:44 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:53.647 17:30:49 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:53.647 17:30:50 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:56.934 17:30:53 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:56.934 17:30:53 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:56.934 17:30:53 -- host/fio.sh@86 -- # nvmftestfini 00:27:56.934 17:30:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:56.934 17:30:53 -- nvmf/common.sh@116 -- # sync 00:27:56.934 17:30:53 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:56.934 17:30:53 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:56.934 17:30:53 -- nvmf/common.sh@119 -- # set +e 00:27:56.934 17:30:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:56.934 17:30:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:56.934 rmmod nvme_rdma 00:27:56.934 rmmod nvme_fabrics 00:27:56.934 17:30:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:56.934 17:30:53 -- nvmf/common.sh@123 -- # set -e 00:27:56.934 17:30:53 -- nvmf/common.sh@124 -- # return 0 00:27:56.934 17:30:53 -- nvmf/common.sh@477 -- # '[' -n 1481550 ']' 00:27:56.934 17:30:53 -- 
nvmf/common.sh@478 -- # killprocess 1481550 00:27:56.934 17:30:53 -- common/autotest_common.sh@936 -- # '[' -z 1481550 ']' 00:27:56.934 17:30:53 -- common/autotest_common.sh@940 -- # kill -0 1481550 00:27:56.934 17:30:53 -- common/autotest_common.sh@941 -- # uname 00:27:56.934 17:30:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:56.934 17:30:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1481550 00:27:56.934 17:30:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:56.934 17:30:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:56.934 17:30:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1481550' 00:27:56.934 killing process with pid 1481550 00:27:56.934 17:30:53 -- common/autotest_common.sh@955 -- # kill 1481550 00:27:56.934 17:30:53 -- common/autotest_common.sh@960 -- # wait 1481550 00:27:57.194 17:30:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:57.194 17:30:53 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:57.194 00:27:57.194 real 0m50.003s 00:27:57.194 user 3m38.156s 00:27:57.194 sys 0m7.644s 00:27:57.194 17:30:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:57.194 17:30:53 -- common/autotest_common.sh@10 -- # set +x 00:27:57.194 ************************************ 00:27:57.194 END TEST nvmf_fio_host 00:27:57.194 ************************************ 00:27:57.194 17:30:53 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:57.194 17:30:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:57.194 17:30:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:57.194 17:30:53 -- common/autotest_common.sh@10 -- # set +x 00:27:57.194 ************************************ 00:27:57.194 START TEST nvmf_failover 00:27:57.194 ************************************ 00:27:57.194 17:30:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:57.194 * Looking for test storage... 00:27:57.194 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:57.194 17:30:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:57.194 17:30:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:57.194 17:30:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:57.194 17:30:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:57.194 17:30:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:57.194 17:30:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:57.194 17:30:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:57.194 17:30:53 -- scripts/common.sh@335 -- # IFS=.-: 00:27:57.194 17:30:53 -- scripts/common.sh@335 -- # read -ra ver1 00:27:57.194 17:30:53 -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.194 17:30:53 -- scripts/common.sh@336 -- # read -ra ver2 00:27:57.194 17:30:53 -- scripts/common.sh@337 -- # local 'op=<' 00:27:57.194 17:30:53 -- scripts/common.sh@339 -- # ver1_l=2 00:27:57.194 17:30:53 -- scripts/common.sh@340 -- # ver2_l=1 00:27:57.194 17:30:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:57.194 17:30:53 -- scripts/common.sh@343 -- # case "$op" in 00:27:57.194 17:30:53 -- scripts/common.sh@344 -- # : 1 00:27:57.194 17:30:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:57.194 17:30:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.194 17:30:53 -- scripts/common.sh@364 -- # decimal 1 00:27:57.194 17:30:53 -- scripts/common.sh@352 -- # local d=1 00:27:57.194 17:30:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.194 17:30:53 -- scripts/common.sh@354 -- # echo 1 00:27:57.194 17:30:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:57.194 17:30:53 -- scripts/common.sh@365 -- # decimal 2 00:27:57.194 17:30:53 -- scripts/common.sh@352 -- # local d=2 00:27:57.194 17:30:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.194 17:30:53 -- scripts/common.sh@354 -- # echo 2 00:27:57.194 17:30:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:57.194 17:30:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:57.194 17:30:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:57.194 17:30:53 -- scripts/common.sh@367 -- # return 0 00:27:57.194 17:30:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.194 17:30:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:57.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.194 --rc genhtml_branch_coverage=1 00:27:57.194 --rc genhtml_function_coverage=1 00:27:57.194 --rc genhtml_legend=1 00:27:57.194 --rc geninfo_all_blocks=1 00:27:57.194 --rc geninfo_unexecuted_blocks=1 00:27:57.194 00:27:57.194 ' 00:27:57.194 17:30:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:57.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.194 --rc genhtml_branch_coverage=1 00:27:57.194 --rc genhtml_function_coverage=1 00:27:57.194 --rc genhtml_legend=1 00:27:57.194 --rc geninfo_all_blocks=1 00:27:57.194 --rc geninfo_unexecuted_blocks=1 00:27:57.194 00:27:57.194 ' 00:27:57.194 17:30:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:57.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.194 --rc genhtml_branch_coverage=1 00:27:57.194 --rc genhtml_function_coverage=1 00:27:57.194 --rc genhtml_legend=1 00:27:57.194 --rc geninfo_all_blocks=1 00:27:57.194 --rc geninfo_unexecuted_blocks=1 00:27:57.194 00:27:57.194 ' 00:27:57.194 17:30:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:57.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.194 --rc genhtml_branch_coverage=1 00:27:57.194 --rc genhtml_function_coverage=1 00:27:57.194 --rc genhtml_legend=1 00:27:57.194 --rc geninfo_all_blocks=1 00:27:57.194 --rc geninfo_unexecuted_blocks=1 00:27:57.194 00:27:57.194 ' 00:27:57.194 17:30:53 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.194 17:30:53 -- nvmf/common.sh@7 -- # uname -s 00:27:57.194 17:30:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.194 17:30:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.194 17:30:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.194 17:30:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.194 17:30:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.194 17:30:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.194 17:30:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.194 17:30:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.194 17:30:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.194 17:30:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.194 17:30:53 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:57.194 17:30:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:57.194 17:30:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.194 17:30:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.194 17:30:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.194 17:30:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:57.194 17:30:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.194 17:30:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.194 17:30:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.194 17:30:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.194 17:30:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.194 17:30:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.194 17:30:53 -- paths/export.sh@5 -- # export PATH 00:27:57.194 17:30:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.194 17:30:53 -- nvmf/common.sh@46 -- # : 0 00:27:57.194 17:30:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:57.194 17:30:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:57.194 17:30:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:57.194 17:30:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.195 17:30:53 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.195 17:30:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:57.195 17:30:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:57.195 17:30:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:57.195 17:30:53 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.195 17:30:53 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.195 17:30:53 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:57.195 17:30:53 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:57.195 17:30:53 -- host/failover.sh@18 -- # nvmftestinit 00:27:57.195 17:30:53 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:57.195 17:30:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.195 17:30:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:57.195 17:30:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:57.195 17:30:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:57.195 17:30:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.195 17:30:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.195 17:30:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.195 17:30:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:57.195 17:30:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:57.195 17:30:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:57.195 17:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:03.766 17:31:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:03.766 17:31:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:03.766 17:31:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:03.766 17:31:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:03.766 17:31:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:03.766 17:31:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:03.766 17:31:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:03.766 17:31:00 -- nvmf/common.sh@294 -- # net_devs=() 00:28:03.766 17:31:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:03.766 17:31:00 -- nvmf/common.sh@295 -- # e810=() 00:28:03.766 17:31:00 -- nvmf/common.sh@295 -- # local -ga e810 00:28:03.766 17:31:00 -- nvmf/common.sh@296 -- # x722=() 00:28:03.766 17:31:00 -- nvmf/common.sh@296 -- # local -ga x722 00:28:03.766 17:31:00 -- nvmf/common.sh@297 -- # mlx=() 00:28:03.766 17:31:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:03.766 17:31:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.766 17:31:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.766 17:31:00 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:03.766 17:31:00 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:03.766 17:31:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:03.766 17:31:00 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:03.766 17:31:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:03.766 17:31:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.766 17:31:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:03.766 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:03.766 17:31:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.766 17:31:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.766 17:31:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:03.766 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:03.766 17:31:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:03.766 17:31:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.767 17:31:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:03.767 17:31:00 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.767 17:31:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.767 17:31:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.767 17:31:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:03.767 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.767 17:31:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.767 17:31:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.767 17:31:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.767 17:31:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:03.767 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.767 17:31:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:03.767 17:31:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:03.767 17:31:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:03.767 17:31:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
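The device-discovery trace above resolves each matching Mellanox PCI function to its kernel net device by globbing the sysfs net/ directory and stripping the path, exactly as the pci_net_devs expansions show. A minimal standalone version of that lookup (the 0000:d9:00.0 address and the gather_supported_nvmf_pci_devs name are taken from the trace; the rest is a sketch, not the helper's full body):

    #!/usr/bin/env bash
    # Resolve a PCI function to its net device(s), as gather_supported_nvmf_pci_devs does above.
    pci=0000:d9:00.0                                  # first mlx5 port reported in the trace
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:d9:00.0/net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"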
00:28:03.767 17:31:00 -- nvmf/common.sh@57 -- # uname 00:28:03.767 17:31:00 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:03.767 17:31:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:03.767 17:31:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:03.767 17:31:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:03.767 17:31:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:03.767 17:31:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:03.767 17:31:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:03.767 17:31:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:03.767 17:31:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:03.767 17:31:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:03.767 17:31:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:03.767 17:31:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.767 17:31:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:03.767 17:31:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:03.767 17:31:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.767 17:31:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:03.767 17:31:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@104 -- # continue 2 00:28:03.767 17:31:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@104 -- # continue 2 00:28:03.767 17:31:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:03.767 17:31:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.767 17:31:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:03.767 17:31:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:03.767 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.767 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:03.767 altname enp217s0f0np0 00:28:03.767 altname ens818f0np0 00:28:03.767 inet 192.168.100.8/24 scope global mlx_0_0 00:28:03.767 valid_lft forever preferred_lft forever 00:28:03.767 17:31:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:03.767 17:31:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.767 17:31:00 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:28:03.767 17:31:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:03.767 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.767 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:03.767 altname enp217s0f1np1 00:28:03.767 altname ens818f1np1 00:28:03.767 inet 192.168.100.9/24 scope global mlx_0_1 00:28:03.767 valid_lft forever preferred_lft forever 00:28:03.767 17:31:00 -- nvmf/common.sh@410 -- # return 0 00:28:03.767 17:31:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:03.767 17:31:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:03.767 17:31:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:03.767 17:31:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:03.767 17:31:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.767 17:31:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:03.767 17:31:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:03.767 17:31:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.767 17:31:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:03.767 17:31:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@104 -- # continue 2 00:28:03.767 17:31:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.767 17:31:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.767 17:31:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@104 -- # continue 2 00:28:03.767 17:31:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:03.767 17:31:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:03.767 17:31:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:03.767 17:31:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:03.767 17:31:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.767 17:31:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:03.767 192.168.100.9' 00:28:03.767 17:31:00 -- nvmf/common.sh@445 -- # head -n 1 00:28:03.767 17:31:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:03.767 192.168.100.9' 00:28:03.767 17:31:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:03.767 17:31:00 -- nvmf/common.sh@446 -- # head -n 1 00:28:03.767 17:31:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:03.767 
192.168.100.9' 00:28:03.767 17:31:00 -- nvmf/common.sh@446 -- # tail -n +2 00:28:03.767 17:31:00 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:03.767 17:31:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:03.767 17:31:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:03.767 17:31:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:03.767 17:31:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:03.767 17:31:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:03.767 17:31:00 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:03.767 17:31:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:03.767 17:31:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:03.767 17:31:00 -- common/autotest_common.sh@10 -- # set +x 00:28:03.767 17:31:00 -- nvmf/common.sh@469 -- # nvmfpid=1492594 00:28:03.767 17:31:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:03.767 17:31:00 -- nvmf/common.sh@470 -- # waitforlisten 1492594 00:28:03.767 17:31:00 -- common/autotest_common.sh@829 -- # '[' -z 1492594 ']' 00:28:03.767 17:31:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.767 17:31:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.767 17:31:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.767 17:31:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.767 17:31:00 -- common/autotest_common.sh@10 -- # set +x 00:28:04.027 [2024-12-14 17:31:00.489811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:04.027 [2024-12-14 17:31:00.489862] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.027 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.027 [2024-12-14 17:31:00.559885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:04.027 [2024-12-14 17:31:00.596859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:04.027 [2024-12-14 17:31:00.596985] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.027 [2024-12-14 17:31:00.596995] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.027 [2024-12-14 17:31:00.597007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
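Condensed from the nvmftestinit/nvmfappstart trace above, target bring-up amounts to the following. The nvmf_tgt path, core mask and RPC socket are taken from the log; the spdk_get_version polling loop is only a simplified stand-in for the waitforlisten helper, which does more checks:

    # Load the fabrics driver and start the NVMe-oF target on cores 1-3 (-m 0xE), as in the trace.
    modprobe nvme-rdma
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Poll the RPC socket until the target answers.
    until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done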
00:28:04.027 [2024-12-14 17:31:00.597130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.027 [2024-12-14 17:31:00.597212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.027 [2024-12-14 17:31:00.597214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.964 17:31:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.964 17:31:01 -- common/autotest_common.sh@862 -- # return 0 00:28:04.964 17:31:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:04.964 17:31:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.964 17:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:04.964 17:31:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.964 17:31:01 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:04.964 [2024-12-14 17:31:01.541168] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1147900/0x114bdb0) succeed. 00:28:04.964 [2024-12-14 17:31:01.550300] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1148e00/0x118d450) succeed. 00:28:05.223 17:31:01 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:05.223 Malloc0 00:28:05.223 17:31:01 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.482 17:31:02 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:05.741 17:31:02 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:05.741 [2024-12-14 17:31:02.369726] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:05.741 17:31:02 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:06.000 [2024-12-14 17:31:02.562078] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:06.000 17:31:02 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:06.259 [2024-12-14 17:31:02.746787] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:06.259 17:31:02 -- host/failover.sh@31 -- # bdevperf_pid=1493069 00:28:06.259 17:31:02 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:06.259 17:31:02 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:06.259 17:31:02 -- host/failover.sh@34 -- # waitforlisten 1493069 /var/tmp/bdevperf.sock 00:28:06.259 17:31:02 -- common/autotest_common.sh@829 -- # '[' -z 1493069 ']' 00:28:06.259 17:31:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:06.259 
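The failover.sh configuration steps traced above (script lines 22-31) reduce to the RPC sequence below; every command, size, address and NQN is copied from the trace, with only the long rpc.py path shortened to $rpc for readability:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # RDMA transport, 64 MB malloc bdev with 512-byte blocks, subsystem and namespace (failover.sh@22-25)
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Three RDMA listeners on the same address, one per port (failover.sh@26-28)
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
    done
    # Initiator-side bdevperf instance on its own RPC socket; -z defers the workload
    # until perform_tests is issued later (failover.sh@30-31)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!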
17:31:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.259 17:31:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:06.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:06.259 17:31:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.259 17:31:02 -- common/autotest_common.sh@10 -- # set +x 00:28:07.277 17:31:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.277 17:31:03 -- common/autotest_common.sh@862 -- # return 0 00:28:07.277 17:31:03 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:07.277 NVMe0n1 00:28:07.277 17:31:03 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:07.537 00:28:07.537 17:31:04 -- host/failover.sh@39 -- # run_test_pid=1493257 00:28:07.537 17:31:04 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:07.537 17:31:04 -- host/failover.sh@41 -- # sleep 1 00:28:08.915 17:31:05 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:08.915 17:31:05 -- host/failover.sh@45 -- # sleep 3 00:28:12.204 17:31:08 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:12.204 00:28:12.204 17:31:08 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:12.204 17:31:08 -- host/failover.sh@50 -- # sleep 3 00:28:15.492 17:31:11 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:15.492 [2024-12-14 17:31:11.959882] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:15.492 17:31:11 -- host/failover.sh@55 -- # sleep 1 00:28:16.429 17:31:12 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:16.688 17:31:13 -- host/failover.sh@59 -- # wait 1493257 00:28:23.260 0 00:28:23.260 17:31:19 -- host/failover.sh@61 -- # killprocess 1493069 00:28:23.260 17:31:19 -- common/autotest_common.sh@936 -- # '[' -z 1493069 ']' 00:28:23.260 17:31:19 -- common/autotest_common.sh@940 -- # kill -0 1493069 00:28:23.260 17:31:19 -- common/autotest_common.sh@941 -- # uname 00:28:23.260 17:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:23.260 17:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1493069 00:28:23.260 17:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:23.260 17:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:23.260 17:31:19 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1493069' 00:28:23.260 killing process with pid 1493069 00:28:23.260 17:31:19 -- common/autotest_common.sh@955 -- # kill 1493069 00:28:23.260 17:31:19 -- common/autotest_common.sh@960 -- # wait 1493069 00:28:23.260 17:31:19 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:23.260 [2024-12-14 17:31:02.818368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:23.260 [2024-12-14 17:31:02.818428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493069 ] 00:28:23.260 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.260 [2024-12-14 17:31:02.890101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.260 [2024-12-14 17:31:02.926908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.260 Running I/O for 15 seconds... 00:28:23.260 [2024-12-14 17:31:06.349743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.260 [2024-12-14 17:31:06.349786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64941 cdw0:1aa3610 sqhd:0716 p:1 m:0 dnr:0 00:28:23.260 [2024-12-14 17:31:06.349798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.260 [2024-12-14 17:31:06.349808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64941 cdw0:1aa3610 sqhd:0716 p:1 m:0 dnr:0 00:28:23.260 [2024-12-14 17:31:06.349818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.260 [2024-12-14 17:31:06.349827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64941 cdw0:1aa3610 sqhd:0716 p:1 m:0 dnr:0 00:28:23.260 [2024-12-14 17:31:06.349837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.260 [2024-12-14 17:31:06.349846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64941 cdw0:1aa3610 sqhd:0716 p:1 m:0 dnr:0 00:28:23.260 [2024-12-14 17:31:06.351667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:23.260 [2024-12-14 17:31:06.351686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
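For reference while reading the try.txt dump that follows, this is the failover sequence that produced the aborted commands, condensed from the failover.sh trace above. The commands are verbatim from the log; the rpc.py path is shortened to $rpc, the bdevperf socket to $sock, the subsystem NQN to $nqn, and the trailing comments are interpretation based on the "Start failover" notices below:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    # Attach the bdevperf controller to the first two listeners (failover.sh@35-36)
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $nqn
    # Kick off the 15-second verify workload, then pull listeners out from under it (failover.sh@38-57)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
    run_test_pid=$!
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420    # failover 4420 -> 4421
    sleep 3
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4421    # failover 4421 -> 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420       # 4420 comes back
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4422    # fail back to 4420
    wait $run_test_pid                                                           # bdevperf finishes the run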
00:28:23.260 [2024-12-14 17:31:06.351702] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:23.260 [2024-12-14 17:31:06.351712] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:23.260 [2024-12-14 17:31:06.351730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.260 [2024-12-14 17:31:06.351741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.351775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.351787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.351820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.351831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.351848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.351859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.351890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.351901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.351937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.351947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.351963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.351974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.351990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 
17:31:06.352041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 
dnr:0 00:28:23.261 [2024-12-14 17:31:06.352339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c9200 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352668] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.352855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x183b00 00:28:23.261 [2024-12-14 17:31:06.352922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.261 [2024-12-14 17:31:06.352962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.352992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:23.261 [2024-12-14 17:31:06.353002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.353018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183e00 00:28:23.261 [2024-12-14 17:31:06.353028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.261 [2024-12-14 17:31:06.353059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.262 [2024-12-14 17:31:06.353069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.262 [2024-12-14 17:31:06.353246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353298] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.262 [2024-12-14 17:31:06.353432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.262 [2024-12-14 17:31:06.353472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.262 [2024-12-14 17:31:06.353548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 
17:31:06.353618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.262 [2024-12-14 17:31:06.353776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.262 [2024-12-14 17:31:06.353858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183e00 00:28:23.262 [2024-12-14 17:31:06.353901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x183b00 00:28:23.262 [2024-12-14 17:31:06.353927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0 00:28:23.262 [2024-12-14 17:31:06.353943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:92096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183e00
[repeated entries, 2024-12-14 17:31:06.353953-17:31:06.356038 (00:28:23.262-00:28:23.264): nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs for READ/WRITE commands on sqid:1 (nsid:1, len:8, lba 92096-93096), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:64941 cdw0:93ff2000 sqhd:cde6 p:1 m:0 dnr:0]
00:28:23.264 [2024-12-14 17:31:06.370449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:23.264 [2024-12-14 17:31:06.370469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:23.264 [2024-12-14 17:31:06.370478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92448 len:8 PRP1 0x0 PRP2 0x0
00:28:23.264 [2024-12-14 17:31:06.370488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.264 [2024-12-14 17:31:06.370541] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller.
00:28:23.264 [2024-12-14 17:31:06.370553] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:23.264 [2024-12-14 17:31:06.370581] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:23.264 [2024-12-14 17:31:06.372364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:23.264 [2024-12-14 17:31:06.406182] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[repeated entries, 2024-12-14 17:31:09.780927-17:31:09.793316 (00:28:23.264-00:28:23.267): nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs for READ/WRITE commands on sqid:1 (nsid:1, len:8, lba 62624-63912), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:64943 cdw0:93ff2000 sqhd:af7c p:1 m:0 dnr:0]
00:28:23.267 [2024-12-14 17:31:09.795087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:23.267 [2024-12-14 17:31:09.795101] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:23.267 [2024-12-14 17:31:09.795110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63280 len:8 PRP1 0x0 PRP2 0x0 00:28:23.268 [2024-12-14 17:31:09.795119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:09.795158] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:28:23.268 [2024-12-14 17:31:09.795170] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:28:23.268 [2024-12-14 17:31:09.795181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.268 [2024-12-14 17:31:09.795213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.268 [2024-12-14 17:31:09.795224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64943 cdw0:0 sqhd:8e3a p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:09.795234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.268 [2024-12-14 17:31:09.795242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64943 cdw0:0 sqhd:8e3a p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:09.795252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.268 [2024-12-14 17:31:09.795261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64943 cdw0:0 sqhd:8e3a p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:09.795270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.268 [2024-12-14 17:31:09.795279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64943 cdw0:0 sqhd:8e3a p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:09.812239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:23.268 [2024-12-14 17:31:09.812261] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:23.268 [2024-12-14 17:31:09.812272] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:23.268 [2024-12-14 17:31:09.813921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.268 [2024-12-14 17:31:09.844172] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
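Each burst of "ABORTED - SQ DELETION" notices above is the expected fallout of one failover step: the initiator tears down the submission queue on the active path (here 192.168.100.8:4421), every in-flight READ/WRITE on qid:1 is aborted and reported by nvme_io_qpair_print_command / spdk_nvme_print_completion, and the sequence ends with a "Resetting controller successful" notice once the next path is up. A minimal sketch of checking such a capture afterwards, assuming the console output was saved to a file (named try.txt here, matching the file the harness cats further down); the expected reset count of 3 is the same check the harness itself performs later in this log:

    # hedged sketch: sanity-check a captured failover log (file name is an assumption)
    log=try.txt
    grep -c 'ABORTED - SQ DELETION' "$log"        # how many I/Os were aborted during SQ teardown
    count=$(grep -c 'Resetting controller successful' "$log")
    (( count == 3 )) || echo "expected 3 successful controller resets, saw $count" >&2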
00:28:23.268 [2024-12-14 17:31:14.152492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.152536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.152563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.152630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.152669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100848 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f0500 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.152884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 
key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.152962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.152981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.152992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183e00 00:28:23.268 [2024-12-14 17:31:14.153001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.153011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.153020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.153030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.153041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.153051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.153060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.153070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.268 [2024-12-14 17:31:14.153079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.153089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.153098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.153109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x183b00 00:28:23.268 [2024-12-14 17:31:14.153118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.268 [2024-12-14 17:31:14.153128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 
17:31:14.153263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101112 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x183b00 00:28:23.269 [2024-12-14 17:31:14.153661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.269 [2024-12-14 17:31:14.153720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183e00 00:28:23.269 [2024-12-14 17:31:14.153758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.269 [2024-12-14 17:31:14.153769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.153777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.153796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.153816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.153835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.153854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.153873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.153892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.153911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.153932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.153951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013879b80 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.153970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 
cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.153981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.153990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.154147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:101912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.154188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.154264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183e00 
00:28:23.270 [2024-12-14 17:31:14.154343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.154420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183e00 00:28:23.270 [2024-12-14 17:31:14.154441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x183b00 00:28:23.270 [2024-12-14 17:31:14.154481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.270 [2024-12-14 17:31:14.154503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.270 [2024-12-14 17:31:14.154513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.154522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.154542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.154561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.154600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.154624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.154682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.154858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:102104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 
17:31:14.154888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x183b00 00:28:23.271 [2024-12-14 17:31:14.154973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.154983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.271 [2024-12-14 17:31:14.154992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.155002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183e00 00:28:23.271 [2024-12-14 17:31:14.155011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64945 cdw0:93ff2000 sqhd:b98c p:1 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.156922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:23.271 [2024-12-14 17:31:14.156936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:23.271 [2024-12-14 17:31:14.156945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102160 len:8 PRP1 0x0 PRP2 0x0 00:28:23.271 [2024-12-14 17:31:14.156954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.271 [2024-12-14 17:31:14.156996] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
00:28:23.271 [2024-12-14 17:31:14.157011] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:28:23.271 [2024-12-14 17:31:14.157021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.271 [2024-12-14 17:31:14.158993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.271 [2024-12-14 17:31:14.173305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:23.271 [2024-12-14 17:31:14.206626] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:23.271 00:28:23.271 Latency(us) 00:28:23.271 [2024-12-14T16:31:19.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.271 [2024-12-14T16:31:19.955Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:23.271 Verification LBA range: start 0x0 length 0x4000 00:28:23.271 NVMe0n1 : 15.00 20301.42 79.30 308.98 0.00 6198.05 432.54 1040187.39 00:28:23.271 [2024-12-14T16:31:19.955Z] =================================================================================================================== 00:28:23.271 [2024-12-14T16:31:19.955Z] Total : 20301.42 79.30 308.98 0.00 6198.05 432.54 1040187.39 00:28:23.271 Received shutdown signal, test time was about 15.000000 seconds 00:28:23.271 00:28:23.271 Latency(us) 00:28:23.271 [2024-12-14T16:31:19.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.271 [2024-12-14T16:31:19.955Z] =================================================================================================================== 00:28:23.271 [2024-12-14T16:31:19.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.271 17:31:19 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:23.271 17:31:19 -- host/failover.sh@65 -- # count=3 00:28:23.271 17:31:19 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:23.271 17:31:19 -- host/failover.sh@73 -- # bdevperf_pid=1495867 00:28:23.271 17:31:19 -- host/failover.sh@75 -- # waitforlisten 1495867 /var/tmp/bdevperf.sock 00:28:23.271 17:31:19 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:23.271 17:31:19 -- common/autotest_common.sh@829 -- # '[' -z 1495867 ']' 00:28:23.271 17:31:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:23.271 17:31:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.271 17:31:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:23.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
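That grep is the pass/fail gate for the first phase: the script requires exactly three 'Resetting controller successful' lines in the captured output, one per failover it provoked, and the trace shows count=3. A condensed sketch of the check, with the output file name assumed to be the same try.txt used elsewhere in host/failover.sh:

    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

The second bdevperf instance launched right after is started with -z and -r /var/tmp/bdevperf.sock, so it sits idle until it is driven over that RPC socket by the rpc.py and bdevperf.py perform_tests calls that follow.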
00:28:23.271 17:31:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.271 17:31:19 -- common/autotest_common.sh@10 -- # set +x 00:28:23.839 17:31:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.839 17:31:20 -- common/autotest_common.sh@862 -- # return 0 00:28:23.839 17:31:20 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:24.098 [2024-12-14 17:31:20.609720] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:24.098 17:31:20 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:24.356 [2024-12-14 17:31:20.798395] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:24.356 17:31:20 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:24.615 NVMe0n1 00:28:24.615 17:31:21 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:24.874 00:28:24.874 17:31:21 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:24.874 00:28:25.133 17:31:21 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:25.133 17:31:21 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:25.133 17:31:21 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:25.392 17:31:21 -- host/failover.sh@87 -- # sleep 3 00:28:28.679 17:31:24 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:28.679 17:31:24 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:28.679 17:31:25 -- host/failover.sh@90 -- # run_test_pid=1496919 00:28:28.679 17:31:25 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:28.679 17:31:25 -- host/failover.sh@92 -- # wait 1496919 00:28:29.614 0 00:28:29.614 17:31:26 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:29.614 [2024-12-14 17:31:19.618997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:29.614 [2024-12-14 17:31:19.619052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495867 ] 00:28:29.614 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.614 [2024-12-14 17:31:19.689026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.614 [2024-12-14 17:31:19.721795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.614 [2024-12-14 17:31:21.938039] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:29.614 [2024-12-14 17:31:21.938586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.614 [2024-12-14 17:31:21.938612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.614 [2024-12-14 17:31:21.957864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:29.614 [2024-12-14 17:31:21.974011] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:29.614 Running I/O for 1 seconds... 00:28:29.614 00:28:29.614 Latency(us) 00:28:29.614 [2024-12-14T16:31:26.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.614 [2024-12-14T16:31:26.298Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:29.614 Verification LBA range: start 0x0 length 0x4000 00:28:29.614 NVMe0n1 : 1.00 25490.04 99.57 0.00 0.00 4997.75 1271.40 17406.36 00:28:29.614 [2024-12-14T16:31:26.298Z] =================================================================================================================== 00:28:29.614 [2024-12-14T16:31:26.298Z] Total : 25490.04 99.57 0.00 0.00 4997.75 1271.40 17406.36 00:28:29.614 17:31:26 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:29.614 17:31:26 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:29.873 17:31:26 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:30.131 17:31:26 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:30.131 17:31:26 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:30.390 17:31:26 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:30.648 17:31:27 -- host/failover.sh@101 -- # sleep 3 00:28:33.937 17:31:30 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:33.937 17:31:30 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:33.937 17:31:30 -- host/failover.sh@108 -- # killprocess 1495867 00:28:33.937 17:31:30 -- common/autotest_common.sh@936 -- # '[' -z 1495867 ']' 00:28:33.937 17:31:30 -- common/autotest_common.sh@940 -- # kill -0 1495867 00:28:33.937 17:31:30 -- common/autotest_common.sh@941 -- # uname 00:28:33.937 17:31:30 -- common/autotest_common.sh@941 
-- # '[' Linux = Linux ']' 00:28:33.937 17:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1495867 00:28:33.937 17:31:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:33.937 17:31:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:33.937 17:31:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1495867' 00:28:33.937 killing process with pid 1495867 00:28:33.937 17:31:30 -- common/autotest_common.sh@955 -- # kill 1495867 00:28:33.937 17:31:30 -- common/autotest_common.sh@960 -- # wait 1495867 00:28:33.937 17:31:30 -- host/failover.sh@110 -- # sync 00:28:33.937 17:31:30 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:34.196 17:31:30 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:34.196 17:31:30 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:34.196 17:31:30 -- host/failover.sh@116 -- # nvmftestfini 00:28:34.196 17:31:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:34.196 17:31:30 -- nvmf/common.sh@116 -- # sync 00:28:34.196 17:31:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:34.196 17:31:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:34.196 17:31:30 -- nvmf/common.sh@119 -- # set +e 00:28:34.196 17:31:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:34.196 17:31:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:34.196 rmmod nvme_rdma 00:28:34.196 rmmod nvme_fabrics 00:28:34.196 17:31:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:34.196 17:31:30 -- nvmf/common.sh@123 -- # set -e 00:28:34.196 17:31:30 -- nvmf/common.sh@124 -- # return 0 00:28:34.196 17:31:30 -- nvmf/common.sh@477 -- # '[' -n 1492594 ']' 00:28:34.196 17:31:30 -- nvmf/common.sh@478 -- # killprocess 1492594 00:28:34.196 17:31:30 -- common/autotest_common.sh@936 -- # '[' -z 1492594 ']' 00:28:34.196 17:31:30 -- common/autotest_common.sh@940 -- # kill -0 1492594 00:28:34.196 17:31:30 -- common/autotest_common.sh@941 -- # uname 00:28:34.196 17:31:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:34.196 17:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1492594 00:28:34.196 17:31:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:34.196 17:31:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:34.196 17:31:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1492594' 00:28:34.196 killing process with pid 1492594 00:28:34.196 17:31:30 -- common/autotest_common.sh@955 -- # kill 1492594 00:28:34.196 17:31:30 -- common/autotest_common.sh@960 -- # wait 1492594 00:28:34.456 17:31:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:34.456 17:31:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:34.456 00:28:34.456 real 0m37.396s 00:28:34.456 user 2m4.804s 00:28:34.456 sys 0m7.341s 00:28:34.456 17:31:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:34.456 17:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.456 ************************************ 00:28:34.456 END TEST nvmf_failover 00:28:34.456 ************************************ 00:28:34.456 17:31:31 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:34.456 17:31:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:34.456 17:31:31 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.456 17:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.456 ************************************ 00:28:34.456 START TEST nvmf_discovery 00:28:34.456 ************************************ 00:28:34.456 17:31:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:34.716 * Looking for test storage... 00:28:34.716 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:34.716 17:31:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:34.716 17:31:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:34.716 17:31:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:34.716 17:31:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:34.716 17:31:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:34.716 17:31:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:34.716 17:31:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:34.716 17:31:31 -- scripts/common.sh@335 -- # IFS=.-: 00:28:34.716 17:31:31 -- scripts/common.sh@335 -- # read -ra ver1 00:28:34.716 17:31:31 -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.716 17:31:31 -- scripts/common.sh@336 -- # read -ra ver2 00:28:34.716 17:31:31 -- scripts/common.sh@337 -- # local 'op=<' 00:28:34.716 17:31:31 -- scripts/common.sh@339 -- # ver1_l=2 00:28:34.716 17:31:31 -- scripts/common.sh@340 -- # ver2_l=1 00:28:34.716 17:31:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:34.716 17:31:31 -- scripts/common.sh@343 -- # case "$op" in 00:28:34.716 17:31:31 -- scripts/common.sh@344 -- # : 1 00:28:34.716 17:31:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:34.716 17:31:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.716 17:31:31 -- scripts/common.sh@364 -- # decimal 1 00:28:34.716 17:31:31 -- scripts/common.sh@352 -- # local d=1 00:28:34.716 17:31:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.716 17:31:31 -- scripts/common.sh@354 -- # echo 1 00:28:34.716 17:31:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:34.716 17:31:31 -- scripts/common.sh@365 -- # decimal 2 00:28:34.716 17:31:31 -- scripts/common.sh@352 -- # local d=2 00:28:34.716 17:31:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.716 17:31:31 -- scripts/common.sh@354 -- # echo 2 00:28:34.716 17:31:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:34.716 17:31:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:34.716 17:31:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:34.716 17:31:31 -- scripts/common.sh@367 -- # return 0 00:28:34.716 17:31:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.716 17:31:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:34.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.716 --rc genhtml_branch_coverage=1 00:28:34.716 --rc genhtml_function_coverage=1 00:28:34.716 --rc genhtml_legend=1 00:28:34.716 --rc geninfo_all_blocks=1 00:28:34.716 --rc geninfo_unexecuted_blocks=1 00:28:34.716 00:28:34.716 ' 00:28:34.716 17:31:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:34.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.716 --rc genhtml_branch_coverage=1 00:28:34.716 --rc genhtml_function_coverage=1 00:28:34.716 --rc genhtml_legend=1 00:28:34.716 --rc geninfo_all_blocks=1 00:28:34.716 --rc geninfo_unexecuted_blocks=1 00:28:34.716 00:28:34.716 ' 00:28:34.716 17:31:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:34.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.716 --rc genhtml_branch_coverage=1 00:28:34.716 --rc genhtml_function_coverage=1 00:28:34.716 --rc genhtml_legend=1 00:28:34.716 --rc geninfo_all_blocks=1 00:28:34.716 --rc geninfo_unexecuted_blocks=1 00:28:34.716 00:28:34.716 ' 00:28:34.716 17:31:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:34.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.716 --rc genhtml_branch_coverage=1 00:28:34.716 --rc genhtml_function_coverage=1 00:28:34.716 --rc genhtml_legend=1 00:28:34.716 --rc geninfo_all_blocks=1 00:28:34.716 --rc geninfo_unexecuted_blocks=1 00:28:34.716 00:28:34.716 ' 00:28:34.716 17:31:31 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.716 17:31:31 -- nvmf/common.sh@7 -- # uname -s 00:28:34.716 17:31:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.716 17:31:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.716 17:31:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.716 17:31:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.716 17:31:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.716 17:31:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.716 17:31:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.716 17:31:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.716 17:31:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.716 17:31:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.716 17:31:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:34.716 17:31:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:34.716 17:31:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.716 17:31:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.716 17:31:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.716 17:31:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:34.716 17:31:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.716 17:31:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.716 17:31:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.716 17:31:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.716 17:31:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.716 17:31:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.716 17:31:31 -- paths/export.sh@5 -- # export PATH 00:28:34.716 17:31:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.716 17:31:31 -- nvmf/common.sh@46 -- # : 0 00:28:34.716 17:31:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:34.716 17:31:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:34.716 17:31:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:34.716 17:31:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.716 17:31:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.716 17:31:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:34.716 17:31:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:34.716 17:31:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:34.716 17:31:31 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:34.716 17:31:31 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:34.716 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:34.716 17:31:31 -- host/discovery.sh@13 -- # exit 0 00:28:34.716 00:28:34.716 real 0m0.218s 00:28:34.716 user 0m0.133s 00:28:34.716 sys 0m0.099s 00:28:34.716 17:31:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:34.716 17:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 ************************************ 00:28:34.716 END TEST nvmf_discovery 00:28:34.716 ************************************ 00:28:34.716 17:31:31 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:34.717 17:31:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:34.717 17:31:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.717 17:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.717 ************************************ 00:28:34.717 START TEST nvmf_discovery_remove_ifc 00:28:34.717 ************************************ 00:28:34.717 17:31:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:34.977 * Looking for test storage... 00:28:34.977 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:34.977 17:31:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:34.977 17:31:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:34.977 17:31:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:34.977 17:31:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:34.977 17:31:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:34.977 17:31:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:34.977 17:31:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:34.977 17:31:31 -- scripts/common.sh@335 -- # IFS=.-: 00:28:34.977 17:31:31 -- scripts/common.sh@335 -- # read -ra ver1 00:28:34.977 17:31:31 -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.977 17:31:31 -- scripts/common.sh@336 -- # read -ra ver2 00:28:34.977 17:31:31 -- scripts/common.sh@337 -- # local 'op=<' 00:28:34.977 17:31:31 -- scripts/common.sh@339 -- # ver1_l=2 00:28:34.977 17:31:31 -- scripts/common.sh@340 -- # ver2_l=1 00:28:34.977 17:31:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:34.977 17:31:31 -- scripts/common.sh@343 -- # case "$op" in 00:28:34.977 17:31:31 -- scripts/common.sh@344 -- # : 1 00:28:34.977 17:31:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:34.977 17:31:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.977 17:31:31 -- scripts/common.sh@364 -- # decimal 1 00:28:34.977 17:31:31 -- scripts/common.sh@352 -- # local d=1 00:28:34.977 17:31:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.977 17:31:31 -- scripts/common.sh@354 -- # echo 1 00:28:34.977 17:31:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:34.977 17:31:31 -- scripts/common.sh@365 -- # decimal 2 00:28:34.977 17:31:31 -- scripts/common.sh@352 -- # local d=2 00:28:34.977 17:31:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.977 17:31:31 -- scripts/common.sh@354 -- # echo 2 00:28:34.977 17:31:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:34.977 17:31:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:34.977 17:31:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:34.977 17:31:31 -- scripts/common.sh@367 -- # return 0 00:28:34.977 17:31:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.977 17:31:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:34.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.977 --rc genhtml_branch_coverage=1 00:28:34.977 --rc genhtml_function_coverage=1 00:28:34.977 --rc genhtml_legend=1 00:28:34.977 --rc geninfo_all_blocks=1 00:28:34.977 --rc geninfo_unexecuted_blocks=1 00:28:34.977 00:28:34.977 ' 00:28:34.977 17:31:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:34.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.977 --rc genhtml_branch_coverage=1 00:28:34.977 --rc genhtml_function_coverage=1 00:28:34.977 --rc genhtml_legend=1 00:28:34.977 --rc geninfo_all_blocks=1 00:28:34.977 --rc geninfo_unexecuted_blocks=1 00:28:34.977 00:28:34.977 ' 00:28:34.977 17:31:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:34.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.977 --rc genhtml_branch_coverage=1 00:28:34.977 --rc genhtml_function_coverage=1 00:28:34.977 --rc genhtml_legend=1 00:28:34.977 --rc geninfo_all_blocks=1 00:28:34.977 --rc geninfo_unexecuted_blocks=1 00:28:34.977 00:28:34.977 ' 00:28:34.977 17:31:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:34.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.977 --rc genhtml_branch_coverage=1 00:28:34.977 --rc genhtml_function_coverage=1 00:28:34.977 --rc genhtml_legend=1 00:28:34.977 --rc geninfo_all_blocks=1 00:28:34.977 --rc geninfo_unexecuted_blocks=1 00:28:34.977 00:28:34.977 ' 00:28:34.977 17:31:31 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.977 17:31:31 -- nvmf/common.sh@7 -- # uname -s 00:28:34.977 17:31:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.977 17:31:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.977 17:31:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.977 17:31:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.977 17:31:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.977 17:31:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.977 17:31:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.977 17:31:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.977 17:31:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.977 17:31:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.977 17:31:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:34.977 17:31:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:34.977 17:31:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.977 17:31:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.977 17:31:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.977 17:31:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:34.977 17:31:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.977 17:31:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.977 17:31:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.977 17:31:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.977 17:31:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.977 17:31:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.977 17:31:31 -- paths/export.sh@5 -- # export PATH 00:28:34.977 17:31:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.977 17:31:31 -- nvmf/common.sh@46 -- # : 0 00:28:34.977 17:31:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:34.977 17:31:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:34.977 17:31:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:34.977 17:31:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.977 17:31:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.977 17:31:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:34.977 17:31:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:34.977 17:31:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:34.977 17:31:31 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:34.977 17:31:31 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:34.977 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:34.977 17:31:31 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:34.977 00:28:34.977 real 0m0.199s 00:28:34.977 user 0m0.104s 00:28:34.977 sys 0m0.106s 00:28:34.977 17:31:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:34.977 17:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.977 ************************************ 00:28:34.977 END TEST nvmf_discovery_remove_ifc 00:28:34.977 ************************************ 00:28:34.977 17:31:31 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:34.977 17:31:31 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:34.977 17:31:31 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:34.977 17:31:31 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:34.978 17:31:31 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:34.978 17:31:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:34.978 17:31:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:34.978 17:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.978 ************************************ 00:28:34.978 START TEST nvmf_bdevperf 00:28:34.978 ************************************ 00:28:34.978 17:31:31 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:35.238 * Looking for test storage... 00:28:35.238 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:35.238 17:31:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:35.238 17:31:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:35.238 17:31:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:35.238 17:31:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:35.238 17:31:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:35.238 17:31:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:35.238 17:31:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:35.238 17:31:31 -- scripts/common.sh@335 -- # IFS=.-: 00:28:35.238 17:31:31 -- scripts/common.sh@335 -- # read -ra ver1 00:28:35.238 17:31:31 -- scripts/common.sh@336 -- # IFS=.-: 00:28:35.238 17:31:31 -- scripts/common.sh@336 -- # read -ra ver2 00:28:35.238 17:31:31 -- scripts/common.sh@337 -- # local 'op=<' 00:28:35.238 17:31:31 -- scripts/common.sh@339 -- # ver1_l=2 00:28:35.238 17:31:31 -- scripts/common.sh@340 -- # ver2_l=1 00:28:35.238 17:31:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:35.238 17:31:31 -- scripts/common.sh@343 -- # case "$op" in 00:28:35.238 17:31:31 -- scripts/common.sh@344 -- # : 1 00:28:35.238 17:31:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:35.238 17:31:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:35.238 17:31:31 -- scripts/common.sh@364 -- # decimal 1 00:28:35.238 17:31:31 -- scripts/common.sh@352 -- # local d=1 00:28:35.238 17:31:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:35.238 17:31:31 -- scripts/common.sh@354 -- # echo 1 00:28:35.238 17:31:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:35.238 17:31:31 -- scripts/common.sh@365 -- # decimal 2 00:28:35.238 17:31:31 -- scripts/common.sh@352 -- # local d=2 00:28:35.238 17:31:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:35.238 17:31:31 -- scripts/common.sh@354 -- # echo 2 00:28:35.238 17:31:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:35.238 17:31:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:35.238 17:31:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:35.238 17:31:31 -- scripts/common.sh@367 -- # return 0 00:28:35.238 17:31:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:35.238 17:31:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.238 --rc genhtml_branch_coverage=1 00:28:35.238 --rc genhtml_function_coverage=1 00:28:35.238 --rc genhtml_legend=1 00:28:35.238 --rc geninfo_all_blocks=1 00:28:35.238 --rc geninfo_unexecuted_blocks=1 00:28:35.238 00:28:35.238 ' 00:28:35.238 17:31:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.238 --rc genhtml_branch_coverage=1 00:28:35.238 --rc genhtml_function_coverage=1 00:28:35.238 --rc genhtml_legend=1 00:28:35.238 --rc geninfo_all_blocks=1 00:28:35.238 --rc geninfo_unexecuted_blocks=1 00:28:35.238 00:28:35.238 ' 00:28:35.238 17:31:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.238 --rc genhtml_branch_coverage=1 00:28:35.238 --rc genhtml_function_coverage=1 00:28:35.238 --rc genhtml_legend=1 00:28:35.238 --rc geninfo_all_blocks=1 00:28:35.238 --rc geninfo_unexecuted_blocks=1 00:28:35.238 00:28:35.238 ' 00:28:35.238 17:31:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:35.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.238 --rc genhtml_branch_coverage=1 00:28:35.238 --rc genhtml_function_coverage=1 00:28:35.238 --rc genhtml_legend=1 00:28:35.238 --rc geninfo_all_blocks=1 00:28:35.238 --rc geninfo_unexecuted_blocks=1 00:28:35.238 00:28:35.238 ' 00:28:35.238 17:31:31 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.238 17:31:31 -- nvmf/common.sh@7 -- # uname -s 00:28:35.238 17:31:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.238 17:31:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.238 17:31:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.238 17:31:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.238 17:31:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.238 17:31:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.238 17:31:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.238 17:31:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.238 17:31:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.238 17:31:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.238 17:31:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:35.238 17:31:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:35.238 17:31:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.238 17:31:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.238 17:31:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.238 17:31:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:35.238 17:31:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.238 17:31:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.238 17:31:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.238 17:31:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.238 17:31:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.238 17:31:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.238 17:31:31 -- paths/export.sh@5 -- # export PATH 00:28:35.238 17:31:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.238 17:31:31 -- nvmf/common.sh@46 -- # : 0 00:28:35.238 17:31:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:35.238 17:31:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:35.238 17:31:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:35.238 17:31:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.238 17:31:31 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.238 17:31:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:35.238 17:31:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:35.238 17:31:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:35.238 17:31:31 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:35.238 17:31:31 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:35.238 17:31:31 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:35.238 17:31:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:35.238 17:31:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.238 17:31:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:35.238 17:31:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:35.238 17:31:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:35.238 17:31:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.238 17:31:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.238 17:31:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.238 17:31:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:35.238 17:31:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:35.238 17:31:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:35.238 17:31:31 -- common/autotest_common.sh@10 -- # set +x 00:28:41.810 17:31:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:41.810 17:31:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:41.810 17:31:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:41.810 17:31:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:41.810 17:31:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:41.810 17:31:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:41.810 17:31:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:41.810 17:31:37 -- nvmf/common.sh@294 -- # net_devs=() 00:28:41.810 17:31:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:41.810 17:31:37 -- nvmf/common.sh@295 -- # e810=() 00:28:41.810 17:31:37 -- nvmf/common.sh@295 -- # local -ga e810 00:28:41.810 17:31:37 -- nvmf/common.sh@296 -- # x722=() 00:28:41.810 17:31:37 -- nvmf/common.sh@296 -- # local -ga x722 00:28:41.810 17:31:37 -- nvmf/common.sh@297 -- # mlx=() 00:28:41.810 17:31:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:41.810 17:31:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.810 17:31:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:41.810 17:31:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:41.810 17:31:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:41.810 
17:31:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:41.810 17:31:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:41.810 17:31:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:41.810 17:31:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:41.810 17:31:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:41.810 17:31:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:41.810 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:41.810 17:31:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:41.810 17:31:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:41.810 17:31:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:41.810 17:31:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:41.810 17:31:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:41.810 17:31:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:41.810 17:31:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:41.810 17:31:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:41.811 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:41.811 17:31:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:41.811 17:31:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:41.811 17:31:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:41.811 17:31:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.811 17:31:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:41.811 17:31:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.811 17:31:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:41.811 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:41.811 17:31:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.811 17:31:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:41.811 17:31:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.811 17:31:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:41.811 17:31:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.811 17:31:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:41.811 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:41.811 17:31:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.811 17:31:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:41.811 17:31:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:41.811 17:31:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:41.811 17:31:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:41.811 17:31:37 -- nvmf/common.sh@57 -- # uname 00:28:41.811 17:31:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:41.811 17:31:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:41.811 
17:31:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:41.811 17:31:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:41.811 17:31:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:41.811 17:31:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:41.811 17:31:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:41.811 17:31:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:41.811 17:31:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:41.811 17:31:37 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:41.811 17:31:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:41.811 17:31:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:41.811 17:31:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:41.811 17:31:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:41.811 17:31:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:41.811 17:31:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:41.811 17:31:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:41.811 17:31:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:41.811 17:31:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:41.811 17:31:37 -- nvmf/common.sh@104 -- # continue 2 00:28:41.811 17:31:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:41.811 17:31:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:41.811 17:31:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:41.811 17:31:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:41.811 17:31:37 -- nvmf/common.sh@104 -- # continue 2 00:28:41.811 17:31:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:41.811 17:31:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:41.811 17:31:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:41.811 17:31:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:41.811 17:31:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:41.811 17:31:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:41.811 17:31:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:41.811 17:31:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:41.811 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:41.811 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:41.811 altname enp217s0f0np0 00:28:41.811 altname ens818f0np0 00:28:41.811 inet 192.168.100.8/24 scope global mlx_0_0 00:28:41.811 valid_lft forever preferred_lft forever 00:28:41.811 17:31:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:41.811 17:31:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:41.811 17:31:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:41.811 17:31:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:41.811 17:31:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:41.811 17:31:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:41.811 17:31:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:41.811 17:31:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:41.811 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:28:41.811 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:41.811 altname enp217s0f1np1 00:28:41.811 altname ens818f1np1 00:28:41.811 inet 192.168.100.9/24 scope global mlx_0_1 00:28:41.811 valid_lft forever preferred_lft forever 00:28:41.811 17:31:37 -- nvmf/common.sh@410 -- # return 0 00:28:41.811 17:31:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:41.811 17:31:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:41.811 17:31:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:41.811 17:31:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:41.811 17:31:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:41.811 17:31:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:41.811 17:31:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:41.811 17:31:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:41.811 17:31:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:41.811 17:31:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:41.811 17:31:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:41.811 17:31:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:41.811 17:31:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:41.811 17:31:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:41.811 17:31:38 -- nvmf/common.sh@104 -- # continue 2 00:28:41.811 17:31:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:41.811 17:31:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:41.811 17:31:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:41.811 17:31:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:41.811 17:31:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:41.811 17:31:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:41.811 17:31:38 -- nvmf/common.sh@104 -- # continue 2 00:28:41.811 17:31:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:41.811 17:31:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:41.811 17:31:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:41.811 17:31:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:41.811 17:31:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:41.811 17:31:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:41.811 17:31:38 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:41.811 17:31:38 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:41.811 17:31:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:41.811 17:31:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:41.811 17:31:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:41.811 17:31:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:41.811 17:31:38 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:41.811 192.168.100.9' 00:28:41.811 17:31:38 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:41.811 192.168.100.9' 00:28:41.811 17:31:38 -- nvmf/common.sh@445 -- # head -n 1 00:28:41.811 17:31:38 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:41.811 17:31:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:41.811 192.168.100.9' 00:28:41.811 17:31:38 -- nvmf/common.sh@446 -- # tail -n +2 00:28:41.811 17:31:38 -- nvmf/common.sh@446 -- # head -n 1 00:28:41.811 17:31:38 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:41.811 17:31:38 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:28:41.811 17:31:38 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:41.811 17:31:38 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:41.811 17:31:38 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:41.811 17:31:38 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:41.811 17:31:38 -- host/bdevperf.sh@25 -- # tgt_init 00:28:41.811 17:31:38 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:41.811 17:31:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:41.811 17:31:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:41.811 17:31:38 -- common/autotest_common.sh@10 -- # set +x 00:28:41.812 17:31:38 -- nvmf/common.sh@469 -- # nvmfpid=1501313 00:28:41.812 17:31:38 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:41.812 17:31:38 -- nvmf/common.sh@470 -- # waitforlisten 1501313 00:28:41.812 17:31:38 -- common/autotest_common.sh@829 -- # '[' -z 1501313 ']' 00:28:41.812 17:31:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.812 17:31:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.812 17:31:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.812 17:31:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.812 17:31:38 -- common/autotest_common.sh@10 -- # set +x 00:28:41.812 [2024-12-14 17:31:38.152238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:41.812 [2024-12-14 17:31:38.152302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.812 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.812 [2024-12-14 17:31:38.223511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:41.812 [2024-12-14 17:31:38.261965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:41.812 [2024-12-14 17:31:38.262101] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.812 [2024-12-14 17:31:38.262110] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.812 [2024-12-14 17:31:38.262119] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
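nvmfappstart brings the target up with -e 0xFFFF (all tracepoint groups enabled, hence the spdk_trace hint above) and core mask -m 0xE, i.e. binary 1110, so its reactors occupy cores 1-3 while core 0 is left free for the bdevperf client that is later started with -c 0x1. A rough sketch of the equivalent manual launch; the rpc_get_methods probe stands in for the script's waitforlisten helper and is an assumption, not what the trace runs:

    # sketch: start the NVMe-oF target on cores 1-3 and wait for its RPC socket
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done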
00:28:41.812 [2024-12-14 17:31:38.262242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.812 [2024-12-14 17:31:38.262328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.812 [2024-12-14 17:31:38.262330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.380 17:31:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:42.380 17:31:38 -- common/autotest_common.sh@862 -- # return 0 00:28:42.380 17:31:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:42.380 17:31:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:42.380 17:31:38 -- common/autotest_common.sh@10 -- # set +x 00:28:42.380 17:31:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.380 17:31:39 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:42.380 17:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.380 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:28:42.380 [2024-12-14 17:31:39.030343] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x185e900/0x1862db0) succeed. 00:28:42.380 [2024-12-14 17:31:39.039471] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x185fe00/0x18a4450) succeed. 00:28:42.640 17:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.640 17:31:39 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.640 17:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.640 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:28:42.640 Malloc0 00:28:42.640 17:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.640 17:31:39 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.640 17:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.640 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:28:42.640 17:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.640 17:31:39 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.640 17:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.640 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:28:42.640 17:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.640 17:31:39 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:42.640 17:31:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.640 17:31:39 -- common/autotest_common.sh@10 -- # set +x 00:28:42.640 [2024-12-14 17:31:39.185004] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:42.640 17:31:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.640 17:31:39 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:42.640 17:31:39 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:42.640 17:31:39 -- nvmf/common.sh@520 -- # config=() 00:28:42.640 17:31:39 -- nvmf/common.sh@520 -- # local subsystem config 00:28:42.640 17:31:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:42.640 17:31:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:42.640 { 00:28:42.640 "params": { 00:28:42.640 "name": "Nvme$subsystem", 00:28:42.640 "trtype": 
"$TEST_TRANSPORT", 00:28:42.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.640 "adrfam": "ipv4", 00:28:42.640 "trsvcid": "$NVMF_PORT", 00:28:42.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.640 "hdgst": ${hdgst:-false}, 00:28:42.640 "ddgst": ${ddgst:-false} 00:28:42.640 }, 00:28:42.640 "method": "bdev_nvme_attach_controller" 00:28:42.640 } 00:28:42.640 EOF 00:28:42.640 )") 00:28:42.640 17:31:39 -- nvmf/common.sh@542 -- # cat 00:28:42.640 17:31:39 -- nvmf/common.sh@544 -- # jq . 00:28:42.640 17:31:39 -- nvmf/common.sh@545 -- # IFS=, 00:28:42.640 17:31:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:42.640 "params": { 00:28:42.640 "name": "Nvme1", 00:28:42.640 "trtype": "rdma", 00:28:42.640 "traddr": "192.168.100.8", 00:28:42.640 "adrfam": "ipv4", 00:28:42.640 "trsvcid": "4420", 00:28:42.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:42.640 "hdgst": false, 00:28:42.640 "ddgst": false 00:28:42.640 }, 00:28:42.640 "method": "bdev_nvme_attach_controller" 00:28:42.640 }' 00:28:42.640 [2024-12-14 17:31:39.232978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:42.640 [2024-12-14 17:31:39.233033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501410 ] 00:28:42.640 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.640 [2024-12-14 17:31:39.304973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.900 [2024-12-14 17:31:39.341933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.900 Running I/O for 1 seconds... 
00:28:43.837 00:28:43.837 Latency(us) 00:28:43.837 [2024-12-14T16:31:40.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.837 [2024-12-14T16:31:40.521Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:43.837 Verification LBA range: start 0x0 length 0x4000 00:28:43.837 Nvme1n1 : 1.00 25453.75 99.43 0.00 0.00 5005.06 1212.42 11796.48 00:28:43.837 [2024-12-14T16:31:40.521Z] =================================================================================================================== 00:28:43.837 [2024-12-14T16:31:40.521Z] Total : 25453.75 99.43 0.00 0.00 5005.06 1212.42 11796.48 00:28:44.096 17:31:40 -- host/bdevperf.sh@30 -- # bdevperfpid=1501651 00:28:44.096 17:31:40 -- host/bdevperf.sh@32 -- # sleep 3 00:28:44.096 17:31:40 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:44.097 17:31:40 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:44.097 17:31:40 -- nvmf/common.sh@520 -- # config=() 00:28:44.097 17:31:40 -- nvmf/common.sh@520 -- # local subsystem config 00:28:44.097 17:31:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:44.097 17:31:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:44.097 { 00:28:44.097 "params": { 00:28:44.097 "name": "Nvme$subsystem", 00:28:44.097 "trtype": "$TEST_TRANSPORT", 00:28:44.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.097 "adrfam": "ipv4", 00:28:44.097 "trsvcid": "$NVMF_PORT", 00:28:44.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.097 "hdgst": ${hdgst:-false}, 00:28:44.097 "ddgst": ${ddgst:-false} 00:28:44.097 }, 00:28:44.097 "method": "bdev_nvme_attach_controller" 00:28:44.097 } 00:28:44.097 EOF 00:28:44.097 )") 00:28:44.097 17:31:40 -- nvmf/common.sh@542 -- # cat 00:28:44.097 17:31:40 -- nvmf/common.sh@544 -- # jq . 00:28:44.097 17:31:40 -- nvmf/common.sh@545 -- # IFS=, 00:28:44.097 17:31:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:44.097 "params": { 00:28:44.097 "name": "Nvme1", 00:28:44.097 "trtype": "rdma", 00:28:44.097 "traddr": "192.168.100.8", 00:28:44.097 "adrfam": "ipv4", 00:28:44.097 "trsvcid": "4420", 00:28:44.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.097 "hdgst": false, 00:28:44.097 "ddgst": false 00:28:44.097 }, 00:28:44.097 "method": "bdev_nvme_attach_controller" 00:28:44.097 }' 00:28:44.097 [2024-12-14 17:31:40.759162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:44.097 [2024-12-14 17:31:40.759219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501651 ] 00:28:44.356 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.356 [2024-12-14 17:31:40.830295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.356 [2024-12-14 17:31:40.865908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.356 Running I/O for 15 seconds... 
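The one-second run above is the sanity baseline: roughly 25.5k IOPS (~99 MiB/s) of 4 KiB verify I/O at queue depth 128, which works out to about 5 ms average latency. The second bdevperf invocation is the actual failover exercise: the same workload for 15 seconds, with the harness killing the target three seconds in (the kill -9 1501313 at the top of the next block) so that in-flight I/O fails and the host-side bdev_nvme layer has to recover. In isolation the host side looks roughly like this (gen_nvmf_target_json comes from nvmf/common.sh and nvmfpid from the earlier nvmfappstart step; both are assumed to be in scope):

```bash
# Host-side shape of the failover run, mirroring host/bdevperf.sh.
BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf

"$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!

sleep 3
kill -9 "$nvmfpid"    # take the target away while I/O is in flight
sleep 3               # the target is restarted afterwards so bdevperf can finish
```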
00:28:47.646 17:31:43 -- host/bdevperf.sh@33 -- # kill -9 1501313 00:28:47.646 17:31:43 -- host/bdevperf.sh@35 -- # sleep 3 00:28:48.217 [2024-12-14 17:31:44.744480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183e00 00:28:48.217 [2024-12-14 17:31:44.744680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 
00:28:48.217 [2024-12-14 17:31:44.744689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183e00 00:28:48.217 [2024-12-14 17:31:44.744828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183e00 00:28:48.217 [2024-12-14 17:31:44.744846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:23248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183e00 00:28:48.217 [2024-12-14 17:31:44.744881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183e00 00:28:48.217 [2024-12-14 17:31:44.744936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.744954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.744982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.217 [2024-12-14 17:31:44.744990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.217 [2024-12-14 17:31:44.745000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x184300 00:28:48.217 [2024-12-14 17:31:44.745008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f5780 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f1580 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f0500 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 
cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e2e80 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 
17:31:44.745357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23464 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000138d8980 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.218 [2024-12-14 17:31:44.745662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183e00 00:28:48.218 [2024-12-14 17:31:44.745681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x184300 00:28:48.218 [2024-12-14 17:31:44.745700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.218 [2024-12-14 17:31:44.745710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.745719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 
cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.745738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cd400 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.745757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.745775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.745795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.745814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.745832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.745851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.745870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.745888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.745906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.745925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.745943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.745961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.745980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.745990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.746000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23624 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.746112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.746149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.746168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.746186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.746204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.746224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 
17:31:44.746243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.746299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x184300 00:28:48.219 [2024-12-14 17:31:44.746375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183e00 00:28:48.219 [2024-12-14 17:31:44.746394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.219 [2024-12-14 17:31:44.746404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.219 [2024-12-14 17:31:44.746413] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x184300 00:28:48.220 [2024-12-14 17:31:44.746433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x184300 00:28:48.220 [2024-12-14 17:31:44.746488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x184300 00:28:48.220 [2024-12-14 17:31:44.746548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 
17:31:44.746595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x184300 00:28:48.220 [2024-12-14 17:31:44.746604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x184300 00:28:48.220 [2024-12-14 17:31:44.746719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23808 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x184300 00:28:48.220 [2024-12-14 17:31:44.746829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183e00 00:28:48.220 [2024-12-14 17:31:44.746848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x184300 00:28:48.220 [2024-12-14 17:31:44.746868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.746878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.220 [2024-12-14 17:31:44.746886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:64963 cdw0:d4aef000 sqhd:18ba p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.758552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:48.220 [2024-12-14 17:31:44.758569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:48.220 [2024-12-14 17:31:44.758580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23848 len:8 PRP1 0x0 PRP2 0x0 00:28:48.220 [2024-12-14 17:31:44.758592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.758635] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
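The burst of *NOTICE* lines above is the direct consequence of that kill: every request still outstanding on qpair 1 is completed locally with status (00/08), i.e. Status Code Type 0h (generic) with Status Code 08h, "Command Aborted due to SQ Deletion"; the qpair is then disconnected and freed, and bdev_nvme schedules a controller reset. To tally the aborted I/O from a saved copy of this console output (the log file name below is only an assumption), a one-liner such as:

```bash
# Count aborted READ vs WRITE submissions reported on qpair 1
# (the log file name is hypothetical).
grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:1' nvmf-bdevperf-console.log | sort | uniq -c
```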
00:28:48.220 [2024-12-14 17:31:44.758669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.220 [2024-12-14 17:31:44.758681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64963 cdw0:0 sqhd:390c p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.758693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.220 [2024-12-14 17:31:44.758705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64963 cdw0:0 sqhd:390c p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.758716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.220 [2024-12-14 17:31:44.758727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64963 cdw0:0 sqhd:390c p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.758739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.220 [2024-12-14 17:31:44.758750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:64963 cdw0:0 sqhd:390c p:1 m:0 dnr:0 00:28:48.220 [2024-12-14 17:31:44.776591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:48.220 [2024-12-14 17:31:44.776646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.220 [2024-12-14 17:31:44.776678] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:48.220 [2024-12-14 17:31:44.778725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.220 [2024-12-14 17:31:44.781083] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:48.220 [2024-12-14 17:31:44.781102] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:48.220 [2024-12-14 17:31:44.781114] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:49.156 [2024-12-14 17:31:45.785029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:49.157 [2024-12-14 17:31:45.785079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.157 [2024-12-14 17:31:45.785220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.157 [2024-12-14 17:31:45.785231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.157 [2024-12-14 17:31:45.785243] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:49.157 [2024-12-14 17:31:45.785724] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:49.157 [2024-12-14 17:31:45.786743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
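The reset attempted right after the dump above fails: the reconnect gets RDMA_CM_EVENT_REJECTED (connect error -74) because nothing is accepting connections on 192.168.100.8:4420 while the target is being restarted, so bdev_nvme retries about once per second (17:31:44, then 17:31:45 above). Independent of the harness, a quick host-side way to confirm the RDMA listener is reachable again could be an nvme-cli discover against the same address (not part of the test script, only a suggested manual check):

  # manual check, not part of the test: probe the discovery service on the target
  nvme discover -t rdma -a 192.168.100.8 -s 4420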
00:28:49.157 [2024-12-14 17:31:45.797573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.157 [2024-12-14 17:31:45.800099] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:49.157 [2024-12-14 17:31:45.800154] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:49.157 [2024-12-14 17:31:45.800180] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:50.094 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1501313 Killed "${NVMF_APP[@]}" "$@" 00:28:50.094 17:31:46 -- host/bdevperf.sh@36 -- # tgt_init 00:28:50.094 17:31:46 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:50.094 17:31:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:50.094 17:31:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:50.094 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:28:50.094 17:31:46 -- nvmf/common.sh@469 -- # nvmfpid=1502711 00:28:50.094 17:31:46 -- nvmf/common.sh@470 -- # waitforlisten 1502711 00:28:50.094 17:31:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:50.094 17:31:46 -- common/autotest_common.sh@829 -- # '[' -z 1502711 ']' 00:28:50.094 17:31:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.094 17:31:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.094 17:31:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.094 17:31:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.094 17:31:46 -- common/autotest_common.sh@10 -- # set +x 00:28:50.353 [2024-12-14 17:31:46.779306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:50.353 [2024-12-14 17:31:46.779355] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.353 [2024-12-14 17:31:46.804163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:50.353 [2024-12-14 17:31:46.804184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.353 [2024-12-14 17:31:46.804296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.353 [2024-12-14 17:31:46.804307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.353 [2024-12-14 17:31:46.804316] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:50.353 [2024-12-14 17:31:46.804878] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:50.353 [2024-12-14 17:31:46.805988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.353 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.353 [2024-12-14 17:31:46.816805] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.353 [2024-12-14 17:31:46.818812] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:50.353 [2024-12-14 17:31:46.818832] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:50.353 [2024-12-14 17:31:46.818839] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:50.353 [2024-12-14 17:31:46.850633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:50.353 [2024-12-14 17:31:46.886555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:50.353 [2024-12-14 17:31:46.886685] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.353 [2024-12-14 17:31:46.886695] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.353 [2024-12-14 17:31:46.886704] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.353 [2024-12-14 17:31:46.886748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.353 [2024-12-14 17:31:46.886838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.353 [2024-12-14 17:31:46.886840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.921 17:31:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.921 17:31:47 -- common/autotest_common.sh@862 -- # return 0 00:28:50.921 17:31:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:50.921 17:31:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:50.921 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:28:51.180 17:31:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.180 17:31:47 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:51.180 17:31:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.180 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:28:51.180 [2024-12-14 17:31:47.672089] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x92c900/0x930db0) succeed. 00:28:51.180 [2024-12-14 17:31:47.681178] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x92de00/0x972450) succeed. 
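The replacement nvmf_tgt comes up with tracepoint group mask 0xFFFF, and the app itself prints how to inspect the trace. Following that hint, a snapshot could be taken while the target is running (binary path and output file are assumptions):

  # snapshot the nvmf tracepoints of the app started with -i 0, per the NOTICE above
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # or keep the raw shared-memory trace for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0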
00:28:51.180 17:31:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.180 17:31:47 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:51.180 17:31:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.180 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:28:51.180 Malloc0 00:28:51.180 17:31:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.180 17:31:47 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.180 17:31:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.180 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:28:51.180 17:31:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.180 17:31:47 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.180 17:31:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.180 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:28:51.180 [2024-12-14 17:31:47.822868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.180 [2024-12-14 17:31:47.822896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.180 [2024-12-14 17:31:47.823040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.180 [2024-12-14 17:31:47.823050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.180 [2024-12-14 17:31:47.823060] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:51.180 [2024-12-14 17:31:47.824529] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.180 [2024-12-14 17:31:47.824843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.180 17:31:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.180 17:31:47 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:51.180 17:31:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.180 17:31:47 -- common/autotest_common.sh@10 -- # set +x 00:28:51.180 [2024-12-14 17:31:47.832119] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:51.180 17:31:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.180 [2024-12-14 17:31:47.836635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.180 17:31:47 -- host/bdevperf.sh@38 -- # wait 1501651 00:28:51.439 [2024-12-14 17:31:47.873913] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
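The rpc_cmd calls above rebuild the state bdevperf expects: an RDMA transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 containing that namespace, and a listener on 192.168.100.8:4420; once the listener is back, the pending controller reset succeeds. Outside the harness the same bring-up could be done with rpc.py; the paths below are assumptions, while the RPC names and arguments mirror the calls in the log:

  # hypothetical manual equivalent of the tgt_init sequence traced above
  ./build/bin/nvmf_tgt -m 0xE &
  ./scripts/rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420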
00:28:59.654 00:28:59.654 Latency(us) 00:28:59.654 [2024-12-14T16:31:56.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.654 [2024-12-14T16:31:56.338Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.654 Verification LBA range: start 0x0 length 0x4000 00:28:59.654 Nvme1n1 : 15.00 18597.74 72.65 16692.59 0.00 3616.63 445.64 1060320.05 00:28:59.654 [2024-12-14T16:31:56.338Z] =================================================================================================================== 00:28:59.654 [2024-12-14T16:31:56.338Z] Total : 18597.74 72.65 16692.59 0.00 3616.63 445.64 1060320.05 00:28:59.654 17:31:56 -- host/bdevperf.sh@39 -- # sync 00:28:59.654 17:31:56 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.654 17:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.654 17:31:56 -- common/autotest_common.sh@10 -- # set +x 00:28:59.654 17:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.654 17:31:56 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:59.654 17:31:56 -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:59.654 17:31:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:59.654 17:31:56 -- nvmf/common.sh@116 -- # sync 00:28:59.654 17:31:56 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:59.654 17:31:56 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:59.654 17:31:56 -- nvmf/common.sh@119 -- # set +e 00:28:59.654 17:31:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:59.654 17:31:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:59.654 rmmod nvme_rdma 00:28:59.654 rmmod nvme_fabrics 00:28:59.654 17:31:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:59.654 17:31:56 -- nvmf/common.sh@123 -- # set -e 00:28:59.654 17:31:56 -- nvmf/common.sh@124 -- # return 0 00:28:59.654 17:31:56 -- nvmf/common.sh@477 -- # '[' -n 1502711 ']' 00:28:59.654 17:31:56 -- nvmf/common.sh@478 -- # killprocess 1502711 00:28:59.654 17:31:56 -- common/autotest_common.sh@936 -- # '[' -z 1502711 ']' 00:28:59.654 17:31:56 -- common/autotest_common.sh@940 -- # kill -0 1502711 00:28:59.654 17:31:56 -- common/autotest_common.sh@941 -- # uname 00:28:59.654 17:31:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:59.654 17:31:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1502711 00:28:59.914 17:31:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:59.914 17:31:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:59.914 17:31:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1502711' 00:28:59.914 killing process with pid 1502711 00:28:59.914 17:31:56 -- common/autotest_common.sh@955 -- # kill 1502711 00:28:59.914 17:31:56 -- common/autotest_common.sh@960 -- # wait 1502711 00:29:00.173 17:31:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:00.173 17:31:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:00.173 00:29:00.173 real 0m25.035s 00:29:00.173 user 1m3.997s 00:29:00.173 sys 0m6.071s 00:29:00.173 17:31:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:00.173 17:31:56 -- common/autotest_common.sh@10 -- # set +x 00:29:00.173 ************************************ 00:29:00.173 END TEST nvmf_bdevperf 00:29:00.173 ************************************ 00:29:00.173 17:31:56 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh 
--transport=rdma 00:29:00.173 17:31:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:00.173 17:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:00.173 17:31:56 -- common/autotest_common.sh@10 -- # set +x 00:29:00.173 ************************************ 00:29:00.173 START TEST nvmf_target_disconnect 00:29:00.173 ************************************ 00:29:00.173 17:31:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:29:00.173 * Looking for test storage... 00:29:00.173 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:00.173 17:31:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:00.173 17:31:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:00.173 17:31:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:00.433 17:31:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:00.433 17:31:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:00.433 17:31:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:00.433 17:31:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:00.433 17:31:56 -- scripts/common.sh@335 -- # IFS=.-: 00:29:00.433 17:31:56 -- scripts/common.sh@335 -- # read -ra ver1 00:29:00.433 17:31:56 -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.433 17:31:56 -- scripts/common.sh@336 -- # read -ra ver2 00:29:00.433 17:31:56 -- scripts/common.sh@337 -- # local 'op=<' 00:29:00.433 17:31:56 -- scripts/common.sh@339 -- # ver1_l=2 00:29:00.433 17:31:56 -- scripts/common.sh@340 -- # ver2_l=1 00:29:00.433 17:31:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:00.433 17:31:56 -- scripts/common.sh@343 -- # case "$op" in 00:29:00.433 17:31:56 -- scripts/common.sh@344 -- # : 1 00:29:00.433 17:31:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:00.433 17:31:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.433 17:31:56 -- scripts/common.sh@364 -- # decimal 1 00:29:00.434 17:31:56 -- scripts/common.sh@352 -- # local d=1 00:29:00.434 17:31:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.434 17:31:56 -- scripts/common.sh@354 -- # echo 1 00:29:00.434 17:31:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:00.434 17:31:56 -- scripts/common.sh@365 -- # decimal 2 00:29:00.434 17:31:56 -- scripts/common.sh@352 -- # local d=2 00:29:00.434 17:31:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.434 17:31:56 -- scripts/common.sh@354 -- # echo 2 00:29:00.434 17:31:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:00.434 17:31:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:00.434 17:31:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:00.434 17:31:56 -- scripts/common.sh@367 -- # return 0 00:29:00.434 17:31:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.434 17:31:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:00.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.434 --rc genhtml_branch_coverage=1 00:29:00.434 --rc genhtml_function_coverage=1 00:29:00.434 --rc genhtml_legend=1 00:29:00.434 --rc geninfo_all_blocks=1 00:29:00.434 --rc geninfo_unexecuted_blocks=1 00:29:00.434 00:29:00.434 ' 00:29:00.434 17:31:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:00.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.434 --rc genhtml_branch_coverage=1 00:29:00.434 --rc genhtml_function_coverage=1 00:29:00.434 --rc genhtml_legend=1 00:29:00.434 --rc geninfo_all_blocks=1 00:29:00.434 --rc geninfo_unexecuted_blocks=1 00:29:00.434 00:29:00.434 ' 00:29:00.434 17:31:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:00.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.434 --rc genhtml_branch_coverage=1 00:29:00.434 --rc genhtml_function_coverage=1 00:29:00.434 --rc genhtml_legend=1 00:29:00.434 --rc geninfo_all_blocks=1 00:29:00.434 --rc geninfo_unexecuted_blocks=1 00:29:00.434 00:29:00.434 ' 00:29:00.434 17:31:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:00.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.434 --rc genhtml_branch_coverage=1 00:29:00.434 --rc genhtml_function_coverage=1 00:29:00.434 --rc genhtml_legend=1 00:29:00.434 --rc geninfo_all_blocks=1 00:29:00.434 --rc geninfo_unexecuted_blocks=1 00:29:00.434 00:29:00.434 ' 00:29:00.434 17:31:56 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.434 17:31:56 -- nvmf/common.sh@7 -- # uname -s 00:29:00.434 17:31:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.434 17:31:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.434 17:31:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.434 17:31:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.434 17:31:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.434 17:31:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.434 17:31:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.434 17:31:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.434 17:31:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.434 17:31:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.434 17:31:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:00.434 17:31:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:00.434 17:31:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.434 17:31:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.434 17:31:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.434 17:31:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:00.434 17:31:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.434 17:31:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.434 17:31:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.434 17:31:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.434 17:31:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.434 17:31:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.434 17:31:56 -- paths/export.sh@5 -- # export PATH 00:29:00.434 17:31:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.434 17:31:56 -- nvmf/common.sh@46 -- # : 0 00:29:00.434 17:31:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:00.434 17:31:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:00.434 17:31:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:00.434 17:31:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.434 17:31:56 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.434 17:31:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:00.434 17:31:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:00.434 17:31:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:00.434 17:31:56 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:00.434 17:31:56 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:00.434 17:31:56 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:00.434 17:31:56 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:00.434 17:31:56 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:00.434 17:31:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.434 17:31:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:00.434 17:31:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:00.434 17:31:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:00.434 17:31:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.434 17:31:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.434 17:31:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.434 17:31:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:00.434 17:31:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:00.434 17:31:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:00.434 17:31:56 -- common/autotest_common.sh@10 -- # set +x 00:29:07.006 17:32:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:07.006 17:32:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:07.006 17:32:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:07.006 17:32:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:07.006 17:32:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:07.006 17:32:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:07.006 17:32:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:07.006 17:32:03 -- nvmf/common.sh@294 -- # net_devs=() 00:29:07.006 17:32:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:07.006 17:32:03 -- nvmf/common.sh@295 -- # e810=() 00:29:07.006 17:32:03 -- nvmf/common.sh@295 -- # local -ga e810 00:29:07.006 17:32:03 -- nvmf/common.sh@296 -- # x722=() 00:29:07.006 17:32:03 -- nvmf/common.sh@296 -- # local -ga x722 00:29:07.006 17:32:03 -- nvmf/common.sh@297 -- # mlx=() 00:29:07.006 17:32:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:07.006 17:32:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.006 17:32:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
00:29:07.006 17:32:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:07.006 17:32:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:07.006 17:32:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:07.006 17:32:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:07.006 17:32:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:07.006 17:32:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:07.006 17:32:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:07.006 17:32:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:07.006 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:07.006 17:32:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:07.006 17:32:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:07.006 17:32:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.006 17:32:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.006 17:32:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:07.006 17:32:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.006 17:32:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:07.006 17:32:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:07.007 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:07.007 17:32:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.007 17:32:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:07.007 17:32:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:07.007 17:32:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.007 17:32:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:07.007 17:32:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.007 17:32:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:07.007 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:07.007 17:32:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.007 17:32:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:07.007 17:32:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.007 17:32:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:07.007 17:32:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.007 17:32:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:07.007 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:07.007 17:32:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.007 17:32:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:07.007 17:32:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:07.007 17:32:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:07.007 17:32:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:07.007 17:32:03 -- nvmf/common.sh@57 -- # uname 
00:29:07.007 17:32:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:07.007 17:32:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:07.007 17:32:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:07.007 17:32:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:07.007 17:32:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:07.007 17:32:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:07.007 17:32:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:07.007 17:32:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:07.007 17:32:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:07.007 17:32:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:07.007 17:32:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:07.007 17:32:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.007 17:32:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:07.007 17:32:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:07.007 17:32:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:07.007 17:32:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:07.007 17:32:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.007 17:32:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.007 17:32:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:07.007 17:32:03 -- nvmf/common.sh@104 -- # continue 2 00:29:07.007 17:32:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.007 17:32:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.007 17:32:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.007 17:32:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.007 17:32:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:07.007 17:32:03 -- nvmf/common.sh@104 -- # continue 2 00:29:07.007 17:32:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:07.007 17:32:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:07.007 17:32:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:07.007 17:32:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:07.007 17:32:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.007 17:32:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.266 17:32:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:07.266 17:32:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:07.266 17:32:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:07.266 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.266 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:07.266 altname enp217s0f0np0 00:29:07.266 altname ens818f0np0 00:29:07.266 inet 192.168.100.8/24 scope global mlx_0_0 00:29:07.266 valid_lft forever preferred_lft forever 00:29:07.266 17:32:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:07.266 17:32:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:07.266 17:32:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:07.266 17:32:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:07.266 17:32:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.266 17:32:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.266 17:32:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:07.266 17:32:03 -- nvmf/common.sh@74 -- # 
[[ -z 192.168.100.9 ]] 00:29:07.266 17:32:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:07.266 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.266 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:07.266 altname enp217s0f1np1 00:29:07.266 altname ens818f1np1 00:29:07.266 inet 192.168.100.9/24 scope global mlx_0_1 00:29:07.266 valid_lft forever preferred_lft forever 00:29:07.266 17:32:03 -- nvmf/common.sh@410 -- # return 0 00:29:07.266 17:32:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:07.266 17:32:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:07.266 17:32:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:07.266 17:32:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:07.266 17:32:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:07.266 17:32:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.266 17:32:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:07.266 17:32:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:07.266 17:32:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:07.266 17:32:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:07.267 17:32:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.267 17:32:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.267 17:32:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.267 17:32:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:07.267 17:32:03 -- nvmf/common.sh@104 -- # continue 2 00:29:07.267 17:32:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:07.267 17:32:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.267 17:32:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.267 17:32:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.267 17:32:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.267 17:32:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:07.267 17:32:03 -- nvmf/common.sh@104 -- # continue 2 00:29:07.267 17:32:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:07.267 17:32:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:07.267 17:32:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:07.267 17:32:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.267 17:32:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:07.267 17:32:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.267 17:32:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:07.267 17:32:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:07.267 17:32:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:07.267 17:32:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:07.267 17:32:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:07.267 17:32:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:07.267 17:32:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:07.267 192.168.100.9' 00:29:07.267 17:32:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:07.267 192.168.100.9' 00:29:07.267 17:32:03 -- nvmf/common.sh@445 -- # head -n 1 00:29:07.267 17:32:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:07.267 17:32:03 -- nvmf/common.sh@446 -- # head -n 1 00:29:07.267 17:32:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:07.267 192.168.100.9' 00:29:07.267 17:32:03 -- nvmf/common.sh@446 -- # 
tail -n +2 00:29:07.267 17:32:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:07.267 17:32:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:07.267 17:32:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:07.267 17:32:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:07.267 17:32:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:07.267 17:32:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:07.267 17:32:03 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:07.267 17:32:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:07.267 17:32:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.267 17:32:03 -- common/autotest_common.sh@10 -- # set +x 00:29:07.267 ************************************ 00:29:07.267 START TEST nvmf_target_disconnect_tc1 00:29:07.267 ************************************ 00:29:07.267 17:32:03 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:29:07.267 17:32:03 -- host/target_disconnect.sh@32 -- # set +e 00:29:07.267 17:32:03 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:07.267 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.267 [2024-12-14 17:32:03.945350] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:07.267 [2024-12-14 17:32:03.945468] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:07.267 [2024-12-14 17:32:03.945527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:29:08.645 [2024-12-14 17:32:04.949400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.645 [2024-12-14 17:32:04.949461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:29:08.645 [2024-12-14 17:32:04.949495] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:29:08.645 [2024-12-14 17:32:04.949564] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:08.645 [2024-12-14 17:32:04.949592] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:08.645 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:29:08.645 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:08.645 Initializing NVMe Controllers 00:29:08.645 17:32:04 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:08.645 17:32:04 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:08.645 17:32:04 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:29:08.645 17:32:04 -- common/autotest_common.sh@1142 -- # return 0 00:29:08.645 17:32:04 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:08.645 17:32:04 -- host/target_disconnect.sh@41 -- # set -e 00:29:08.645 00:29:08.645 real 0m1.127s 00:29:08.645 user 0m0.843s 00:29:08.645 sys 0m0.273s 00:29:08.645 17:32:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:08.645 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:29:08.645 ************************************ 00:29:08.645 END TEST nvmf_target_disconnect_tc1 00:29:08.645 ************************************ 00:29:08.645 17:32:05 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:08.645 17:32:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:08.645 17:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:08.645 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:08.645 ************************************ 00:29:08.645 START TEST nvmf_target_disconnect_tc2 00:29:08.645 ************************************ 00:29:08.645 17:32:05 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:29:08.645 17:32:05 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:29:08.645 17:32:05 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:08.645 17:32:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:08.645 17:32:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:08.645 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:08.645 17:32:05 -- nvmf/common.sh@469 -- # nvmfpid=1507873 00:29:08.645 17:32:05 -- nvmf/common.sh@470 -- # waitforlisten 1507873 00:29:08.645 17:32:05 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:08.645 17:32:05 -- common/autotest_common.sh@829 -- # '[' -z 1507873 ']' 00:29:08.645 17:32:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.645 17:32:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.645 17:32:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.645 17:32:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.645 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:08.645 [2024-12-14 17:32:05.060755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:08.645 [2024-12-14 17:32:05.060804] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.645 [2024-12-14 17:32:05.144910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.645 [2024-12-14 17:32:05.182118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:08.645 [2024-12-14 17:32:05.182242] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.645 [2024-12-14 17:32:05.182252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.645 [2024-12-14 17:32:05.182264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.645 [2024-12-14 17:32:05.182401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:08.645 [2024-12-14 17:32:05.182452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:08.645 [2024-12-14 17:32:05.182538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:08.645 [2024-12-14 17:32:05.182539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:09.212 17:32:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.212 17:32:05 -- common/autotest_common.sh@862 -- # return 0 00:29:09.212 17:32:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:09.212 17:32:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:09.212 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:09.471 17:32:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.471 17:32:05 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:09.471 17:32:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.471 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:09.471 Malloc0 00:29:09.471 17:32:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.471 17:32:05 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:09.471 17:32:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.471 17:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:09.471 [2024-12-14 17:32:05.973278] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21c0ab0/0x21cc580) succeed. 00:29:09.471 [2024-12-14 17:32:05.982633] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21c2050/0x220dc20) succeed. 
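The nvmf_tgt for this test starts with core mask 0xF0, so its reactors land on cores 4-7 while the reconnect I/O generator uses -c 0xF (cores 0-3). The EAL message about no free 2048 kB hugepages on node 1 is informational here, since the reactors still start; the per-node hugepage pools could be inspected with standard sysfs paths (not part of the test flow):

  # not part of the test: show 2 MB hugepage counters per NUMA node
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages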
00:29:09.471 17:32:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.471 17:32:06 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.471 17:32:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.471 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:09.471 17:32:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.471 17:32:06 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.471 17:32:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.471 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:09.471 17:32:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.471 17:32:06 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:09.471 17:32:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.471 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:09.471 [2024-12-14 17:32:06.120641] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:09.471 17:32:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.471 17:32:06 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:09.471 17:32:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.471 17:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:09.471 17:32:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.471 17:32:06 -- host/target_disconnect.sh@50 -- # reconnectpid=1508109 00:29:09.471 17:32:06 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:09.471 17:32:06 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:09.731 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.638 17:32:08 -- host/target_disconnect.sh@53 -- # kill -9 1507873 00:29:11.638 17:32:08 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:13.015 Read completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.015 Write completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.015 Write completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.015 Write completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.015 Read completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.015 Write completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.015 Read completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.015 Read completed with error (sct=0, sc=8) 00:29:13.015 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with 
error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Write completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 Read completed with error (sct=0, sc=8) 00:29:13.016 starting I/O failed 00:29:13.016 [2024-12-14 17:32:09.314804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.583 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1507873 Killed "${NVMF_APP[@]}" "$@" 00:29:13.583 17:32:10 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:29:13.583 17:32:10 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:13.583 17:32:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:13.583 17:32:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:13.583 17:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:13.583 17:32:10 -- nvmf/common.sh@469 -- # nvmfpid=1508831 00:29:13.583 17:32:10 -- nvmf/common.sh@470 -- # waitforlisten 1508831 00:29:13.583 17:32:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:13.583 17:32:10 -- common/autotest_common.sh@829 -- # '[' -z 1508831 ']' 00:29:13.583 17:32:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.583 17:32:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.583 17:32:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.583 17:32:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.583 17:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:13.584 [2024-12-14 17:32:10.195862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:13.584 [2024-12-14 17:32:10.195921] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.584 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.843 [2024-12-14 17:32:10.283494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Write completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 Read completed with error (sct=0, sc=8) 00:29:13.843 starting I/O failed 00:29:13.843 [2024-12-14 17:32:10.319925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.843 [2024-12-14 17:32:10.322148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.843 [2024-12-14 17:32:10.322247] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.843 [2024-12-14 17:32:10.322257] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.843 [2024-12-14 17:32:10.322266] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.843 [2024-12-14 17:32:10.322379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:13.843 [2024-12-14 17:32:10.322489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:13.843 [2024-12-14 17:32:10.322536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:13.843 [2024-12-14 17:32:10.322538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:14.412 17:32:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:14.412 17:32:11 -- common/autotest_common.sh@862 -- # return 0 00:29:14.412 17:32:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:14.412 17:32:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:14.412 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.412 17:32:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.412 17:32:11 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:14.412 17:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.412 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.412 Malloc0 00:29:14.412 17:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.412 17:32:11 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:14.412 17:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.412 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.672 [2024-12-14 17:32:11.105583] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x236aab0/0x2376580) succeed. 00:29:14.672 [2024-12-14 17:32:11.114997] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x236c050/0x23b7c20) succeed. 
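Note on the core mask: the "-m 0xF0" that nvmf_tgt is restarted with above selects CPU cores 4-7, which matches the four "Reactor started on core 4/5/6/7" notices. A minimal sketch of the bit decoding (illustrative only, not part of the test scripts):

    # Decode an SPDK -m core mask into the reactor cores it selects.
    # 0xF0 == 0b11110000, so this prints cores 4, 5, 6 and 7.
    mask=0xF0
    for core in $(seq 0 31); do
      if (( (mask >> core) & 1 )); then
        echo "reactor core: $core"
      fi
    done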
00:29:14.672 17:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.672 17:32:11 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.672 17:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.672 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.672 17:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.672 17:32:11 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.672 17:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.672 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.672 17:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.672 17:32:11 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:14.672 17:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.672 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.672 [2024-12-14 17:32:11.255072] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:14.672 17:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.672 17:32:11 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:14.672 17:32:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.672 17:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.672 17:32:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.672 17:32:11 -- host/target_disconnect.sh@58 -- # wait 1508109 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with 
error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Read completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 Write completed with error (sct=0, sc=8) 00:29:14.672 starting I/O failed 00:29:14.672 [2024-12-14 17:32:11.325109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.672 [2024-12-14 17:32:11.328935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.672 [2024-12-14 17:32:11.328994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.672 [2024-12-14 17:32:11.329015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.672 [2024-12-14 17:32:11.329026] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.672 [2024-12-14 17:32:11.329043] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.672 [2024-12-14 17:32:11.339613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.672 qpair failed and we were unable to recover it. 00:29:14.672 [2024-12-14 17:32:11.349111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.672 [2024-12-14 17:32:11.349158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.672 [2024-12-14 17:32:11.349181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.672 [2024-12-14 17:32:11.349191] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.672 [2024-12-14 17:32:11.349200] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.932 [2024-12-14 17:32:11.359695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.932 qpair failed and we were unable to recover it. 
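Taken together, the disconnect_init path logged above (restart nvmf_tgt, then rebuild the RDMA subsystem over RPC) roughly corresponds to the sketch below. The rpc.py wrapper and the relative paths are assumptions for illustration; the RPC names and arguments are the ones visible in the rpc_cmd calls in the log:

    # Hedged sketch of the target re-initialization seen above (not the actual
    # test script): start the target on cores 4-7, then re-create the bdev,
    # RDMA transport, subsystem, namespace and listeners over JSON-RPC.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

While this re-initialization runs, the host side still holds queue pairs to the previous target process, which is presumably why the CQ transport errors and failed CONNECT retries continue below.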
00:29:14.932 [2024-12-14 17:32:11.368971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.932 [2024-12-14 17:32:11.369013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.932 [2024-12-14 17:32:11.369030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.932 [2024-12-14 17:32:11.369039] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.932 [2024-12-14 17:32:11.369048] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.932 [2024-12-14 17:32:11.379569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-12-14 17:32:11.389237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.932 [2024-12-14 17:32:11.389279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.932 [2024-12-14 17:32:11.389296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.932 [2024-12-14 17:32:11.389305] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.932 [2024-12-14 17:32:11.389313] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.932 [2024-12-14 17:32:11.399735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-12-14 17:32:11.409098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.932 [2024-12-14 17:32:11.409141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.932 [2024-12-14 17:32:11.409158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.932 [2024-12-14 17:32:11.409168] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.932 [2024-12-14 17:32:11.409177] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.932 [2024-12-14 17:32:11.419636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.932 qpair failed and we were unable to recover it. 
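The retry blocks that follow all fail the same way, so the status is worth decoding once (an interpretation, not taken from the log itself): sct 1 is the command-specific status type, and sc 130 is 0x82, which for a Fabrics CONNECT command reads as "Connect Invalid Parameters". That is consistent with the target-side "Unknown controller ID 0x1" error: after the forced restart the target no longer tracks controller 1, so the host's I/O-queue CONNECT naming that controller is rejected until the admin association is rebuilt.

    # Quick arithmetic check of the status code above (interpretation, see note).
    printf 'sc=%d -> 0x%02X\n' 130 130   # prints: sc=130 -> 0x82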
00:29:14.932 [2024-12-14 17:32:11.429285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.932 [2024-12-14 17:32:11.429321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.932 [2024-12-14 17:32:11.429338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.932 [2024-12-14 17:32:11.429347] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.932 [2024-12-14 17:32:11.429358] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.932 [2024-12-14 17:32:11.439870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-12-14 17:32:11.449262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.449302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.449319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.449329] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.449337] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.459801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-12-14 17:32:11.469302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.469344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.469361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.469370] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.469378] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.479667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 
00:29:14.933 [2024-12-14 17:32:11.489447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.489485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.489514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.489523] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.489531] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.499927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-12-14 17:32:11.509399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.509435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.509451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.509460] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.509469] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.520081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-12-14 17:32:11.529474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.529522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.529538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.529548] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.529557] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.540118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 
00:29:14.933 [2024-12-14 17:32:11.549575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.549614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.549630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.549639] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.549648] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.559933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-12-14 17:32:11.569567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.569606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.569622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.569631] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.569640] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.579846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-12-14 17:32:11.589732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.589773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.589790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.589799] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.589807] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.933 [2024-12-14 17:32:11.600173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.933 qpair failed and we were unable to recover it. 
00:29:14.933 [2024-12-14 17:32:11.609646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.933 [2024-12-14 17:32:11.609687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.933 [2024-12-14 17:32:11.609703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.933 [2024-12-14 17:32:11.609716] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.933 [2024-12-14 17:32:11.609724] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.193 [2024-12-14 17:32:11.620287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.193 qpair failed and we were unable to recover it. 00:29:15.193 [2024-12-14 17:32:11.629821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.193 [2024-12-14 17:32:11.629861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.193 [2024-12-14 17:32:11.629877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.193 [2024-12-14 17:32:11.629886] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.193 [2024-12-14 17:32:11.629895] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.193 [2024-12-14 17:32:11.640386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.193 qpair failed and we were unable to recover it. 00:29:15.193 [2024-12-14 17:32:11.649835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.193 [2024-12-14 17:32:11.649873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.193 [2024-12-14 17:32:11.649890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.193 [2024-12-14 17:32:11.649899] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.193 [2024-12-14 17:32:11.649907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.193 [2024-12-14 17:32:11.660495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.193 qpair failed and we were unable to recover it. 
00:29:15.194 [2024-12-14 17:32:11.669953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.669999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.670016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.670025] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.670033] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.680562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 00:29:15.194 [2024-12-14 17:32:11.690043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.690083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.690099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.690108] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.690116] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.700636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 00:29:15.194 [2024-12-14 17:32:11.709960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.709999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.710015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.710024] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.710032] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.720531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 
00:29:15.194 [2024-12-14 17:32:11.730090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.730132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.730149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.730158] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.730167] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.740633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 00:29:15.194 [2024-12-14 17:32:11.750204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.750238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.750257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.750267] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.750277] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.760641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 00:29:15.194 [2024-12-14 17:32:11.770107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.770148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.770167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.770176] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.770185] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.780704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 
00:29:15.194 [2024-12-14 17:32:11.790333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.790377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.790396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.790405] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.790414] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.800878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 00:29:15.194 [2024-12-14 17:32:11.810293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.810333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.810350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.810359] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.810367] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.820885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 00:29:15.194 [2024-12-14 17:32:11.830383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.830418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.830434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.830444] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.830452] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.841028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 
00:29:15.194 [2024-12-14 17:32:11.850428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.850471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.850487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.850501] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.850510] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.194 [2024-12-14 17:32:11.860980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.194 qpair failed and we were unable to recover it. 00:29:15.194 [2024-12-14 17:32:11.870533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.194 [2024-12-14 17:32:11.870580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.194 [2024-12-14 17:32:11.870596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.194 [2024-12-14 17:32:11.870605] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.194 [2024-12-14 17:32:11.870614] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:11.881040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-12-14 17:32:11.890549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:11.890589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:11.890605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:11.890613] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:11.890622] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:11.901121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 
00:29:15.455 [2024-12-14 17:32:11.910676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:11.910717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:11.910733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:11.910742] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:11.910750] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:11.921016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-12-14 17:32:11.930619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:11.930661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:11.930677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:11.930686] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:11.930694] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:11.941314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-12-14 17:32:11.950779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:11.950817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:11.950833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:11.950841] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:11.950850] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:11.961367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 
00:29:15.455 [2024-12-14 17:32:11.970802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:11.970844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:11.970861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:11.970869] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:11.970878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:11.981318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-12-14 17:32:11.990883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:11.990917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:11.990934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:11.990943] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:11.990952] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:12.001544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-12-14 17:32:12.010892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:12.010928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:12.010944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:12.010953] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:12.010962] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:12.021531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 
00:29:15.455 [2024-12-14 17:32:12.030881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:12.030921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:12.030937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:12.030946] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:12.030955] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:12.041464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-12-14 17:32:12.051046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-12-14 17:32:12.051087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-12-14 17:32:12.051102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-12-14 17:32:12.051111] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-12-14 17:32:12.051123] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.455 [2024-12-14 17:32:12.061622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-12-14 17:32:12.071287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-12-14 17:32:12.071324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-12-14 17:32:12.071340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-12-14 17:32:12.071349] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-12-14 17:32:12.071358] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.456 [2024-12-14 17:32:12.081887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.456 qpair failed and we were unable to recover it. 
00:29:15.456 [2024-12-14 17:32:12.091287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-12-14 17:32:12.091327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-12-14 17:32:12.091343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-12-14 17:32:12.091351] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-12-14 17:32:12.091359] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.456 [2024-12-14 17:32:12.101767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-12-14 17:32:12.111336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-12-14 17:32:12.111376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-12-14 17:32:12.111392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-12-14 17:32:12.111401] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-12-14 17:32:12.111409] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.456 [2024-12-14 17:32:12.122053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-12-14 17:32:12.131420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-12-14 17:32:12.131466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-12-14 17:32:12.131482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-12-14 17:32:12.131491] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-12-14 17:32:12.131504] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.715 [2024-12-14 17:32:12.141942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-12-14 17:32:12.151355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.715 [2024-12-14 17:32:12.151398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.715 [2024-12-14 17:32:12.151414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.715 [2024-12-14 17:32:12.151423] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.715 [2024-12-14 17:32:12.151431] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.715 [2024-12-14 17:32:12.162002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-12-14 17:32:12.171599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.715 [2024-12-14 17:32:12.171636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.715 [2024-12-14 17:32:12.171651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.715 [2024-12-14 17:32:12.171660] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.715 [2024-12-14 17:32:12.171669] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.715 [2024-12-14 17:32:12.182247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-12-14 17:32:12.191564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.715 [2024-12-14 17:32:12.191605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.715 [2024-12-14 17:32:12.191621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.715 [2024-12-14 17:32:12.191630] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.715 [2024-12-14 17:32:12.191638] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.715 [2024-12-14 17:32:12.202297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-12-14 17:32:12.211763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.715 [2024-12-14 17:32:12.211801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.715 [2024-12-14 17:32:12.211817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.715 [2024-12-14 17:32:12.211826] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.211834] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.222173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-12-14 17:32:12.231836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.231875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.231893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.231902] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.231911] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.242482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-12-14 17:32:12.251864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.251903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.251919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.251928] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.251937] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.262412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-12-14 17:32:12.271837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.271876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.271892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.271901] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.271910] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.282582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-12-14 17:32:12.291952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.291995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.292011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.292020] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.292028] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.302370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-12-14 17:32:12.312045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.312089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.312105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.312114] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.312122] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.322704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-12-14 17:32:12.332154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.332195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.332211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.332220] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.332228] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.342639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-12-14 17:32:12.352210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.352249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.352265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.352274] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.352282] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.362769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-12-14 17:32:12.372071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.372112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.372129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.372138] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.372146] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.716 [2024-12-14 17:32:12.382951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-12-14 17:32:12.392337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-12-14 17:32:12.392373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-12-14 17:32:12.392389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-12-14 17:32:12.392398] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-12-14 17:32:12.392406] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.976 [2024-12-14 17:32:12.402953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.976 qpair failed and we were unable to recover it. 00:29:15.976 [2024-12-14 17:32:12.412214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-12-14 17:32:12.412255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-12-14 17:32:12.412277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-12-14 17:32:12.412286] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-12-14 17:32:12.412295] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.976 [2024-12-14 17:32:12.423018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.976 qpair failed and we were unable to recover it. 00:29:15.976 [2024-12-14 17:32:12.432449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-12-14 17:32:12.432489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-12-14 17:32:12.432509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-12-14 17:32:12.432519] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-12-14 17:32:12.432527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.976 [2024-12-14 17:32:12.443072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-12-14 17:32:12.452424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-12-14 17:32:12.452465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-12-14 17:32:12.452481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-12-14 17:32:12.452489] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-12-14 17:32:12.452502] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.976 [2024-12-14 17:32:12.463088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.976 qpair failed and we were unable to recover it. 00:29:15.976 [2024-12-14 17:32:12.472377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-12-14 17:32:12.472421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-12-14 17:32:12.472437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-12-14 17:32:12.472445] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-12-14 17:32:12.472454] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.976 [2024-12-14 17:32:12.483078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.976 qpair failed and we were unable to recover it. 00:29:15.976 [2024-12-14 17:32:12.492657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-12-14 17:32:12.492702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-12-14 17:32:12.492717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.492726] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.492738] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.503026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 
00:29:15.977 [2024-12-14 17:32:12.512476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.512520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.512536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.512545] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.512554] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.523346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 00:29:15.977 [2024-12-14 17:32:12.532772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.532814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.532830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.532839] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.532847] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.543279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 00:29:15.977 [2024-12-14 17:32:12.552913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.552954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.552970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.552979] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.552988] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.563248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 
00:29:15.977 [2024-12-14 17:32:12.572712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.572749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.572765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.572774] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.572782] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.583436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 00:29:15.977 [2024-12-14 17:32:12.592956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.592998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.593014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.593024] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.593032] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.603456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 00:29:15.977 [2024-12-14 17:32:12.612879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.612922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.612938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.612947] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.612955] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.623326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 
00:29:15.977 [2024-12-14 17:32:12.633028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.633069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.633085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.633094] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.633103] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.977 [2024-12-14 17:32:12.643325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.977 qpair failed and we were unable to recover it. 00:29:15.977 [2024-12-14 17:32:12.653133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-12-14 17:32:12.653170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-12-14 17:32:12.653187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-12-14 17:32:12.653196] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-12-14 17:32:12.653204] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.663676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.673049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.673090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.673107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.673120] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.673129] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.683751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-12-14 17:32:12.693225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.693265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.693282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.693290] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.693299] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.703632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.713251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.713294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.713312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.713321] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.713330] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.723974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.733283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.733330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.733346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.733355] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.733364] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.743955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-12-14 17:32:12.753356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.753395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.753412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.753421] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.753429] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.764194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.773385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.773427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.773443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.773452] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.773461] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.784051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.793428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.793469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.793485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.793494] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.793508] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.803934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-12-14 17:32:12.813481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.813521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.813537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.813546] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.813554] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.824252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.833662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.833699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.833715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.833724] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.833732] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.844116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.853672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.853713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.853732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.853741] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.853749] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.864239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-12-14 17:32:12.873591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.873628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.873644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.873652] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.873661] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.884400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.893742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.893781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-12-14 17:32:12.893796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-12-14 17:32:12.893805] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-12-14 17:32:12.893814] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.238 [2024-12-14 17:32:12.904340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.238 qpair failed and we were unable to recover it. 00:29:16.238 [2024-12-14 17:32:12.913834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-12-14 17:32:12.913873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-12-14 17:32:12.913889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-12-14 17:32:12.913898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-12-14 17:32:12.913906] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:12.924560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 
00:29:16.499 [2024-12-14 17:32:12.933815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:12.933855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:12.933872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:12.933881] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:12.933892] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:12.944653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:12.953946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:12.953988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:12.954004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:12.954013] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:12.954021] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:12.964599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:12.973916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:12.973957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:12.973974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:12.973982] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:12.973991] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:12.984554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 
00:29:16.499 [2024-12-14 17:32:12.994071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:12.994115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:12.994135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:12.994144] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:12.994153] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.004629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:13.013976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:13.014019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:13.014035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:13.014045] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:13.014053] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.024824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:13.034118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:13.034152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:13.034169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:13.034178] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:13.034186] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.044883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 
00:29:16.499 [2024-12-14 17:32:13.054144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:13.054183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:13.054200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:13.054209] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:13.054217] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.064782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:13.074209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:13.074250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:13.074267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:13.074276] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:13.074284] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.084806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:13.094224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:13.094266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:13.094283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:13.094292] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:13.094301] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.104904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 
00:29:16.499 [2024-12-14 17:32:13.114259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:13.114305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:13.114321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:13.114333] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:13.114342] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.125162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:13.134284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.499 [2024-12-14 17:32:13.134323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.499 [2024-12-14 17:32:13.134340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.499 [2024-12-14 17:32:13.134348] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.499 [2024-12-14 17:32:13.134357] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.499 [2024-12-14 17:32:13.144892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.499 qpair failed and we were unable to recover it. 00:29:16.499 [2024-12-14 17:32:13.154322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.500 [2024-12-14 17:32:13.154361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.500 [2024-12-14 17:32:13.154377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.500 [2024-12-14 17:32:13.154385] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.500 [2024-12-14 17:32:13.154394] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.500 [2024-12-14 17:32:13.164941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.500 qpair failed and we were unable to recover it. 
00:29:16.500 [2024-12-14 17:32:13.174369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.500 [2024-12-14 17:32:13.174417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.500 [2024-12-14 17:32:13.174433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.500 [2024-12-14 17:32:13.174442] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.500 [2024-12-14 17:32:13.174450] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.184926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.194455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.194493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.194515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.194524] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.194533] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.205079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.214475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.214514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.214530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.214539] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.214548] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.224894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 
00:29:16.760 [2024-12-14 17:32:13.234650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.234692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.234709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.234718] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.234726] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.245267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.254692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.254736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.254752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.254761] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.254770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.265187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.274689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.274724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.274740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.274749] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.274758] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.285379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 
00:29:16.760 [2024-12-14 17:32:13.294802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.294844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.294863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.294872] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.294881] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.305248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.314804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.314846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.314863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.314872] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.314880] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.325377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.335014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.335052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.335068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.335077] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.335086] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.345296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 
00:29:16.760 [2024-12-14 17:32:13.354923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.354968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.354984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.354992] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.355001] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.365370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.374991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.375033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.760 [2024-12-14 17:32:13.375050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.760 [2024-12-14 17:32:13.375059] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.760 [2024-12-14 17:32:13.375070] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.760 [2024-12-14 17:32:13.385635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.760 qpair failed and we were unable to recover it. 00:29:16.760 [2024-12-14 17:32:13.395134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.760 [2024-12-14 17:32:13.395174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.761 [2024-12-14 17:32:13.395191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.761 [2024-12-14 17:32:13.395199] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.761 [2024-12-14 17:32:13.395208] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.761 [2024-12-14 17:32:13.405539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.761 qpair failed and we were unable to recover it. 
00:29:16.761 [2024-12-14 17:32:13.415160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.761 [2024-12-14 17:32:13.415199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.761 [2024-12-14 17:32:13.415215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.761 [2024-12-14 17:32:13.415224] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.761 [2024-12-14 17:32:13.415232] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.761 [2024-12-14 17:32:13.425700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.761 qpair failed and we were unable to recover it. 00:29:16.761 [2024-12-14 17:32:13.435327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.761 [2024-12-14 17:32:13.435365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.761 [2024-12-14 17:32:13.435381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.761 [2024-12-14 17:32:13.435390] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.761 [2024-12-14 17:32:13.435398] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.445744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.455246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.455287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.455303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.455312] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.455321] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.465812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 
00:29:17.021 [2024-12-14 17:32:13.475407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.475449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.475464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.475473] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.475482] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.485947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.495318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.495362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.495378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.495387] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.495395] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.505944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.515315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.515356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.515372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.515381] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.515389] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.526064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 
00:29:17.021 [2024-12-14 17:32:13.535580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.535616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.535631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.535640] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.535649] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.546235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.555574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.555615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.555631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.555643] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.555652] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.566243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.575661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.575702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.575719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.575728] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.575736] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.586386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 
00:29:17.021 [2024-12-14 17:32:13.595767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.595803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.595819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.595828] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.595836] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.606278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.615746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.615789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.615805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.615814] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.615823] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.626196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.635914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.635951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.635967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.635976] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.635984] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.646522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 
00:29:17.021 [2024-12-14 17:32:13.655928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.655970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.655986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.021 [2024-12-14 17:32:13.655995] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.021 [2024-12-14 17:32:13.656003] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.021 [2024-12-14 17:32:13.666420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.021 qpair failed and we were unable to recover it. 00:29:17.021 [2024-12-14 17:32:13.675938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.021 [2024-12-14 17:32:13.675973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.021 [2024-12-14 17:32:13.675989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.022 [2024-12-14 17:32:13.675998] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.022 [2024-12-14 17:32:13.676007] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.022 [2024-12-14 17:32:13.686715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.022 qpair failed and we were unable to recover it. 00:29:17.022 [2024-12-14 17:32:13.696031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.022 [2024-12-14 17:32:13.696071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.022 [2024-12-14 17:32:13.696088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.022 [2024-12-14 17:32:13.696097] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.022 [2024-12-14 17:32:13.696105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.706488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 
00:29:17.282 [2024-12-14 17:32:13.715953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.715993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.716009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.716018] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.716027] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.726666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.736059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.736098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.736118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.736128] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.736136] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.746471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.756164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.756207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.756223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.756232] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.756240] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.766765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 
00:29:17.282 [2024-12-14 17:32:13.776135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.776169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.776186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.776195] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.776203] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.786825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.796232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.796272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.796289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.796298] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.796306] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.806865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.816374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.816414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.816430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.816439] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.816448] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.826926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 
00:29:17.282 [2024-12-14 17:32:13.836586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.836629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.836646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.836655] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.836663] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.847059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.856518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.856554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.856570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.856579] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.856587] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.866956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.876620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.876657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.876674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.876683] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.876691] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.886989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 
00:29:17.282 [2024-12-14 17:32:13.896651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.896688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.896705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.896714] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.896722] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.907114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.916686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.916724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.916740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.916749] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.916757] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.927233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 00:29:17.282 [2024-12-14 17:32:13.936770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.936807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.936823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.282 [2024-12-14 17:32:13.936831] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.282 [2024-12-14 17:32:13.936840] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.282 [2024-12-14 17:32:13.947471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.282 qpair failed and we were unable to recover it. 
00:29:17.282 [2024-12-14 17:32:13.956835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.282 [2024-12-14 17:32:13.956873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.282 [2024-12-14 17:32:13.956889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.283 [2024-12-14 17:32:13.956898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.283 [2024-12-14 17:32:13.956907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:13.967345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:13.976841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:13.976885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:13.976902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:13.976911] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:13.976919] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:13.987517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:13.996852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:13.996891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:13.996907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:13.996916] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:13.996927] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.007555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 
00:29:17.543 [2024-12-14 17:32:14.017009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.017050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.017066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.017075] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.017083] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.027385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:14.036976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.037014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.037031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.037039] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.037048] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.047731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:14.057185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.057226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.057242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.057251] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.057259] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.067491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 
00:29:17.543 [2024-12-14 17:32:14.077176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.077213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.077229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.077238] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.077246] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.087806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:14.097155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.097193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.097209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.097217] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.097226] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.107741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:14.117363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.117403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.117419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.117428] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.117436] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.127828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 
00:29:17.543 [2024-12-14 17:32:14.137240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.137284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.137299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.137308] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.137317] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.147922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:14.157428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.157473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.157489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.157502] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.157510] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.168082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:14.177462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.177507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.177526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.177535] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.177544] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.187864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 
00:29:17.543 [2024-12-14 17:32:14.197598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.197638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.197654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.197663] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.197671] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.543 [2024-12-14 17:32:14.207951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-12-14 17:32:14.217606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-12-14 17:32:14.217652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-12-14 17:32:14.217667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-12-14 17:32:14.217676] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-12-14 17:32:14.217685] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.227873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 00:29:17.803 [2024-12-14 17:32:14.237689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.803 [2024-12-14 17:32:14.237729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.803 [2024-12-14 17:32:14.237745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.803 [2024-12-14 17:32:14.237754] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.803 [2024-12-14 17:32:14.237763] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.248183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 
00:29:17.803 [2024-12-14 17:32:14.257860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.803 [2024-12-14 17:32:14.257899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.803 [2024-12-14 17:32:14.257916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.803 [2024-12-14 17:32:14.257925] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.803 [2024-12-14 17:32:14.257934] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.268521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 00:29:17.803 [2024-12-14 17:32:14.277712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.803 [2024-12-14 17:32:14.277759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.803 [2024-12-14 17:32:14.277776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.803 [2024-12-14 17:32:14.277785] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.803 [2024-12-14 17:32:14.277793] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.288444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 00:29:17.803 [2024-12-14 17:32:14.297910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.803 [2024-12-14 17:32:14.297951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.803 [2024-12-14 17:32:14.297967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.803 [2024-12-14 17:32:14.297976] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.803 [2024-12-14 17:32:14.297984] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.308618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 
00:29:17.803 [2024-12-14 17:32:14.318057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.803 [2024-12-14 17:32:14.318100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.803 [2024-12-14 17:32:14.318116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.803 [2024-12-14 17:32:14.318125] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.803 [2024-12-14 17:32:14.318133] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.328597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 00:29:17.803 [2024-12-14 17:32:14.338133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.803 [2024-12-14 17:32:14.338172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.803 [2024-12-14 17:32:14.338189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.803 [2024-12-14 17:32:14.338198] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.803 [2024-12-14 17:32:14.338206] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.348697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 00:29:17.803 [2024-12-14 17:32:14.358161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.803 [2024-12-14 17:32:14.358199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.803 [2024-12-14 17:32:14.358218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.803 [2024-12-14 17:32:14.358227] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.803 [2024-12-14 17:32:14.358235] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.803 [2024-12-14 17:32:14.368318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.803 qpair failed and we were unable to recover it. 
00:29:17.803 [2024-12-14 17:32:14.378043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.804 [2024-12-14 17:32:14.378081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.804 [2024-12-14 17:32:14.378097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.804 [2024-12-14 17:32:14.378106] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.804 [2024-12-14 17:32:14.378114] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.804 [2024-12-14 17:32:14.388703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.804 qpair failed and we were unable to recover it. 00:29:17.804 [2024-12-14 17:32:14.398192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.804 [2024-12-14 17:32:14.398231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.804 [2024-12-14 17:32:14.398247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.804 [2024-12-14 17:32:14.398256] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.804 [2024-12-14 17:32:14.398264] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.804 [2024-12-14 17:32:14.408795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.804 qpair failed and we were unable to recover it. 00:29:17.804 [2024-12-14 17:32:14.418246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.804 [2024-12-14 17:32:14.418283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.804 [2024-12-14 17:32:14.418299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.804 [2024-12-14 17:32:14.418307] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.804 [2024-12-14 17:32:14.418316] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.804 [2024-12-14 17:32:14.428905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.804 qpair failed and we were unable to recover it. 
00:29:17.804 [2024-12-14 17:32:14.438270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.804 [2024-12-14 17:32:14.438308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.804 [2024-12-14 17:32:14.438323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.804 [2024-12-14 17:32:14.438332] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.804 [2024-12-14 17:32:14.438344] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.804 [2024-12-14 17:32:14.449039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.804 qpair failed and we were unable to recover it. 00:29:17.804 [2024-12-14 17:32:14.458351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.804 [2024-12-14 17:32:14.458392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.804 [2024-12-14 17:32:14.458408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.804 [2024-12-14 17:32:14.458417] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.804 [2024-12-14 17:32:14.458425] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.804 [2024-12-14 17:32:14.469006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.804 qpair failed and we were unable to recover it. 00:29:17.804 [2024-12-14 17:32:14.478561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.804 [2024-12-14 17:32:14.478601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.804 [2024-12-14 17:32:14.478617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.804 [2024-12-14 17:32:14.478626] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.804 [2024-12-14 17:32:14.478635] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.489019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-12-14 17:32:14.498456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.498490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.498511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.498520] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.498529] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.509052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-12-14 17:32:14.518595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.518634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.518650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.518660] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.518668] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.529133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-12-14 17:32:14.538576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.538615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.538631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.538640] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.538649] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.549345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-12-14 17:32:14.558712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.558753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.558770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.558779] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.558787] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.569257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-12-14 17:32:14.578671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.578706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.578723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.578731] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.578740] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.589231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-12-14 17:32:14.598701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.598742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.598758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.598767] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.598775] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.609225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-12-14 17:32:14.618684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.618727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.618743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.618757] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.618766] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.629541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-12-14 17:32:14.638909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.638949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.638965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.638974] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.638982] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.649393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 00:29:18.064 [2024-12-14 17:32:14.659022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.659068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.659083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.659092] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.064 [2024-12-14 17:32:14.659101] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.064 [2024-12-14 17:32:14.669475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.064 qpair failed and we were unable to recover it. 
00:29:18.064 [2024-12-14 17:32:14.678993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.064 [2024-12-14 17:32:14.679033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.064 [2024-12-14 17:32:14.679049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.064 [2024-12-14 17:32:14.679058] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.065 [2024-12-14 17:32:14.679067] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.065 [2024-12-14 17:32:14.689646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-12-14 17:32:14.699084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.065 [2024-12-14 17:32:14.699127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.065 [2024-12-14 17:32:14.699144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.065 [2024-12-14 17:32:14.699153] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.065 [2024-12-14 17:32:14.699162] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.065 [2024-12-14 17:32:14.709551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.065 qpair failed and we were unable to recover it. 00:29:18.065 [2024-12-14 17:32:14.719035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.065 [2024-12-14 17:32:14.719072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.065 [2024-12-14 17:32:14.719090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.065 [2024-12-14 17:32:14.719100] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.065 [2024-12-14 17:32:14.719108] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.065 [2024-12-14 17:32:14.729614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.065 qpair failed and we were unable to recover it. 
00:29:18.065 [2024-12-14 17:32:14.739208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.065 [2024-12-14 17:32:14.739242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.065 [2024-12-14 17:32:14.739258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.065 [2024-12-14 17:32:14.739267] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.065 [2024-12-14 17:32:14.739276] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.325 [2024-12-14 17:32:14.749645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.325 qpair failed and we were unable to recover it. 00:29:18.325 [2024-12-14 17:32:14.759237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.325 [2024-12-14 17:32:14.759276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.325 [2024-12-14 17:32:14.759292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.325 [2024-12-14 17:32:14.759302] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.325 [2024-12-14 17:32:14.759311] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.325 [2024-12-14 17:32:14.769507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.325 qpair failed and we were unable to recover it. 00:29:18.325 [2024-12-14 17:32:14.779131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.325 [2024-12-14 17:32:14.779179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.325 [2024-12-14 17:32:14.779196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.325 [2024-12-14 17:32:14.779205] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.325 [2024-12-14 17:32:14.779214] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.325 [2024-12-14 17:32:14.789790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.325 qpair failed and we were unable to recover it. 
00:29:18.325 [2024-12-14 17:32:14.799388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.325 [2024-12-14 17:32:14.799430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.325 [2024-12-14 17:32:14.799450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.325 [2024-12-14 17:32:14.799459] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.325 [2024-12-14 17:32:14.799467] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.325 [2024-12-14 17:32:14.809754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.325 qpair failed and we were unable to recover it. 00:29:18.325 [2024-12-14 17:32:14.819282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.325 [2024-12-14 17:32:14.819322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.325 [2024-12-14 17:32:14.819338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.325 [2024-12-14 17:32:14.819347] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.325 [2024-12-14 17:32:14.819356] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.325 [2024-12-14 17:32:14.829984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.325 qpair failed and we were unable to recover it. 00:29:18.326 [2024-12-14 17:32:14.839396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.839433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.839450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.839459] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.839467] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.850017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 
00:29:18.326 [2024-12-14 17:32:14.859544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.859587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.859603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.859613] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.859621] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.869904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 00:29:18.326 [2024-12-14 17:32:14.879513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.879550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.879567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.879576] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.879588] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.889925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 00:29:18.326 [2024-12-14 17:32:14.899638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.899677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.899694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.899702] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.899711] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.910133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 
00:29:18.326 [2024-12-14 17:32:14.919617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.919656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.919672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.919682] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.919690] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.930183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 00:29:18.326 [2024-12-14 17:32:14.939609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.939654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.939670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.939680] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.939689] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.950016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 00:29:18.326 [2024-12-14 17:32:14.959785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.959825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.959841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.959849] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.959858] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.970183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 
00:29:18.326 [2024-12-14 17:32:14.979692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.979733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.979750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.979759] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.979768] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.326 [2024-12-14 17:32:14.990446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.326 qpair failed and we were unable to recover it. 00:29:18.326 [2024-12-14 17:32:14.999828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.326 [2024-12-14 17:32:14.999868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.326 [2024-12-14 17:32:14.999884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.326 [2024-12-14 17:32:14.999893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.326 [2024-12-14 17:32:14.999901] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.586 [2024-12-14 17:32:15.010593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.586 qpair failed and we were unable to recover it. 00:29:18.586 [2024-12-14 17:32:15.019793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.586 [2024-12-14 17:32:15.019832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.586 [2024-12-14 17:32:15.019848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.586 [2024-12-14 17:32:15.019857] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.586 [2024-12-14 17:32:15.019866] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.586 [2024-12-14 17:32:15.030490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.586 qpair failed and we were unable to recover it. 
00:29:18.586 [2024-12-14 17:32:15.039985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.586 [2024-12-14 17:32:15.040022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.586 [2024-12-14 17:32:15.040038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.586 [2024-12-14 17:32:15.040047] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.586 [2024-12-14 17:32:15.040055] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.586 [2024-12-14 17:32:15.050604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.586 qpair failed and we were unable to recover it. 00:29:18.586 [2024-12-14 17:32:15.059960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.586 [2024-12-14 17:32:15.059996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.586 [2024-12-14 17:32:15.060012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.586 [2024-12-14 17:32:15.060024] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.586 [2024-12-14 17:32:15.060033] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.586 [2024-12-14 17:32:15.070453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.586 qpair failed and we were unable to recover it. 00:29:18.586 [2024-12-14 17:32:15.080063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.586 [2024-12-14 17:32:15.080101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.586 [2024-12-14 17:32:15.080118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.586 [2024-12-14 17:32:15.080127] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.586 [2024-12-14 17:32:15.080136] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.090718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 
00:29:18.587 [2024-12-14 17:32:15.100021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.100066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.100082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.100091] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.100100] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.110690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 00:29:18.587 [2024-12-14 17:32:15.120208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.120248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.120264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.120273] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.120281] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.130706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 00:29:18.587 [2024-12-14 17:32:15.140212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.140249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.140265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.140273] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.140282] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.150893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 
00:29:18.587 [2024-12-14 17:32:15.160356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.160393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.160409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.160418] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.160426] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.170817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 00:29:18.587 [2024-12-14 17:32:15.180301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.180340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.180356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.180365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.180374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.190993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 00:29:18.587 [2024-12-14 17:32:15.200405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.200442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.200457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.200467] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.200475] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.211036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 
00:29:18.587 [2024-12-14 17:32:15.220313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.220352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.220368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.220377] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.220385] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.231092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 00:29:18.587 [2024-12-14 17:32:15.240470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.240513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.240535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.240545] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.240553] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.587 [2024-12-14 17:32:15.251058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.587 qpair failed and we were unable to recover it. 00:29:18.587 [2024-12-14 17:32:15.260508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.587 [2024-12-14 17:32:15.260552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.587 [2024-12-14 17:32:15.260568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.587 [2024-12-14 17:32:15.260577] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.587 [2024-12-14 17:32:15.260586] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.271214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 
00:29:18.847 [2024-12-14 17:32:15.280606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.280650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.280667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.280676] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.280685] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.291247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.300635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.300677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.300693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.300702] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.300710] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.311235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.320689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.320730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.320748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.320757] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.320770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.331397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 
00:29:18.847 [2024-12-14 17:32:15.340812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.340858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.340874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.340883] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.340892] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.351455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.360807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.360853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.360870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.360879] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.360888] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.371399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.380841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.380882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.380899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.380908] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.380916] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.391506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 
00:29:18.847 [2024-12-14 17:32:15.400913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.400950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.400966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.400975] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.400983] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.411403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.420970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.421015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.421031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.421040] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.421048] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.431547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.441102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.441140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.441156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.441164] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.441173] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.451562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 
00:29:18.847 [2024-12-14 17:32:15.461107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.461146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.461162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.461171] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.461179] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.471683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.481117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.481155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.481172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.481181] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.481189] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.491691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 00:29:18.847 [2024-12-14 17:32:15.501214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.501258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.501275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.501288] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.501296] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.847 [2024-12-14 17:32:15.511773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.847 qpair failed and we were unable to recover it. 
00:29:18.847 [2024-12-14 17:32:15.521314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.847 [2024-12-14 17:32:15.521350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.847 [2024-12-14 17:32:15.521366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.847 [2024-12-14 17:32:15.521375] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.847 [2024-12-14 17:32:15.521384] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.531906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.541394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.541439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.541455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.541464] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.541473] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.552079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.561358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.561398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.561414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.561423] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.561431] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.572043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 
00:29:19.107 [2024-12-14 17:32:15.581433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.581479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.581501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.581510] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.581519] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.592073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.601528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.601564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.601581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.601589] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.601598] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.611987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.621651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.621690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.621706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.621715] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.621724] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.632006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 
00:29:19.107 [2024-12-14 17:32:15.641634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.641671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.641687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.641695] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.641704] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.652244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.661731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.661769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.661785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.661794] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.661803] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.672201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.681812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.681855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.681875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.681884] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.681892] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.692153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 
00:29:19.107 [2024-12-14 17:32:15.701860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.701902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.701918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.701926] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.701935] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.712722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.721972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.722013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.722031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.722040] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.722048] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.732501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 00:29:19.107 [2024-12-14 17:32:15.741865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.107 [2024-12-14 17:32:15.741908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.107 [2024-12-14 17:32:15.741924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.107 [2024-12-14 17:32:15.741933] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.107 [2024-12-14 17:32:15.741942] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.107 [2024-12-14 17:32:15.752446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.107 qpair failed and we were unable to recover it. 
00:29:19.108 [2024-12-14 17:32:15.762087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.108 [2024-12-14 17:32:15.762133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.108 [2024-12-14 17:32:15.762148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.108 [2024-12-14 17:32:15.762157] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.108 [2024-12-14 17:32:15.762166] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.108 [2024-12-14 17:32:15.772460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.108 qpair failed and we were unable to recover it. 00:29:19.108 [2024-12-14 17:32:15.782108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.108 [2024-12-14 17:32:15.782145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.108 [2024-12-14 17:32:15.782162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.108 [2024-12-14 17:32:15.782171] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.108 [2024-12-14 17:32:15.782179] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.367 [2024-12-14 17:32:15.792524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.367 qpair failed and we were unable to recover it. 00:29:19.367 [2024-12-14 17:32:15.802115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.367 [2024-12-14 17:32:15.802156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.367 [2024-12-14 17:32:15.802173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.367 [2024-12-14 17:32:15.802182] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.367 [2024-12-14 17:32:15.802190] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.367 [2024-12-14 17:32:15.812695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.367 qpair failed and we were unable to recover it. 
00:29:19.367 [2024-12-14 17:32:15.822099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.367 [2024-12-14 17:32:15.822143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.367 [2024-12-14 17:32:15.822159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.367 [2024-12-14 17:32:15.822168] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.367 [2024-12-14 17:32:15.822176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.832692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:15.842261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.842298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.842314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.842323] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.842331] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.852702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:15.862434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.862479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.862495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.862509] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.862517] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.872887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 
00:29:19.368 [2024-12-14 17:32:15.882365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.882405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.882421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.882430] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.882439] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.892974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:15.902466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.902514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.902530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.902539] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.902548] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.913096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:15.922579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.922621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.922637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.922646] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.922654] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.933035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 
00:29:19.368 [2024-12-14 17:32:15.942668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.942712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.942728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.942737] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.942749] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.953280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:15.962633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.962672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.962688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.962697] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.962706] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.973093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:15.982798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:15.982835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:15.982851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:15.982860] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:15.982869] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:15.993258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 
00:29:19.368 [2024-12-14 17:32:16.002841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:16.002875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:16.002891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:16.002900] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:16.002909] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:16.013301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:16.022922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:16.022961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:16.022977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:16.022986] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:16.022995] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.368 [2024-12-14 17:32:16.033360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.368 qpair failed and we were unable to recover it. 00:29:19.368 [2024-12-14 17:32:16.042896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.368 [2024-12-14 17:32:16.042936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.368 [2024-12-14 17:32:16.042952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.368 [2024-12-14 17:32:16.042961] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.368 [2024-12-14 17:32:16.042969] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.053331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 
00:29:19.628 [2024-12-14 17:32:16.063016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.063057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.063075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.063084] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.063092] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.073523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 00:29:19.628 [2024-12-14 17:32:16.082817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.082856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.082872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.082882] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.082891] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.093471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 00:29:19.628 [2024-12-14 17:32:16.103200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.103233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.103249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.103258] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.103267] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.113726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 
00:29:19.628 [2024-12-14 17:32:16.123191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.123231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.123250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.123259] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.123267] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.133666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 00:29:19.628 [2024-12-14 17:32:16.143227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.143264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.143280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.143289] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.143298] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.153801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 00:29:19.628 [2024-12-14 17:32:16.163360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.163404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.163420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.163429] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.163438] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.173609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 
00:29:19.628 [2024-12-14 17:32:16.183377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.183420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.183436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.183446] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.183454] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.193923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 00:29:19.628 [2024-12-14 17:32:16.203441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.203482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.203511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.203522] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.628 [2024-12-14 17:32:16.203532] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.628 [2024-12-14 17:32:16.213774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.628 qpair failed and we were unable to recover it. 00:29:19.628 [2024-12-14 17:32:16.223471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.628 [2024-12-14 17:32:16.223517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.628 [2024-12-14 17:32:16.223533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.628 [2024-12-14 17:32:16.223542] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.629 [2024-12-14 17:32:16.223550] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.629 [2024-12-14 17:32:16.233930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.629 qpair failed and we were unable to recover it. 
00:29:19.629 [2024-12-14 17:32:16.243612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.629 [2024-12-14 17:32:16.243649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.629 [2024-12-14 17:32:16.243665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.629 [2024-12-14 17:32:16.243674] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.629 [2024-12-14 17:32:16.243682] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.629 [2024-12-14 17:32:16.254078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.629 qpair failed and we were unable to recover it. 00:29:19.629 [2024-12-14 17:32:16.263602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.629 [2024-12-14 17:32:16.263642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.629 [2024-12-14 17:32:16.263658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.629 [2024-12-14 17:32:16.263667] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.629 [2024-12-14 17:32:16.263675] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.629 [2024-12-14 17:32:16.274043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.629 qpair failed and we were unable to recover it. 00:29:19.629 [2024-12-14 17:32:16.283541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.629 [2024-12-14 17:32:16.283588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.629 [2024-12-14 17:32:16.283604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.629 [2024-12-14 17:32:16.283613] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.629 [2024-12-14 17:32:16.283622] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.629 [2024-12-14 17:32:16.294054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.629 qpair failed and we were unable to recover it. 
00:29:19.629 [2024-12-14 17:32:16.303632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.629 [2024-12-14 17:32:16.303678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.629 [2024-12-14 17:32:16.303697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.629 [2024-12-14 17:32:16.303706] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.629 [2024-12-14 17:32:16.303715] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.888 [2024-12-14 17:32:16.314184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.888 qpair failed and we were unable to recover it. 00:29:19.888 [2024-12-14 17:32:16.323666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.888 [2024-12-14 17:32:16.323704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.888 [2024-12-14 17:32:16.323720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.888 [2024-12-14 17:32:16.323729] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.888 [2024-12-14 17:32:16.323737] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.888 [2024-12-14 17:32:16.334239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.888 qpair failed and we were unable to recover it. 00:29:19.888 [2024-12-14 17:32:16.343755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.888 [2024-12-14 17:32:16.343787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.888 [2024-12-14 17:32:16.343803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.888 [2024-12-14 17:32:16.343812] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.888 [2024-12-14 17:32:16.343820] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.888 [2024-12-14 17:32:16.354142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.888 qpair failed and we were unable to recover it. 
00:29:19.888 [2024-12-14 17:32:16.363831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.888 [2024-12-14 17:32:16.363871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.888 [2024-12-14 17:32:16.363887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.888 [2024-12-14 17:32:16.363896] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.888 [2024-12-14 17:32:16.363904] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.888 [2024-12-14 17:32:16.374310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.888 qpair failed and we were unable to recover it. 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Write completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.826 Read completed with error (sct=0, sc=8) 00:29:20.826 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Read completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Read completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Read completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write 
completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 Write completed with error (sct=0, sc=8) 00:29:20.827 starting I/O failed 00:29:20.827 [2024-12-14 17:32:17.379493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.827 [2024-12-14 17:32:17.386566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.827 [2024-12-14 17:32:17.386612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.827 [2024-12-14 17:32:17.386630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.827 [2024-12-14 17:32:17.386640] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.827 [2024-12-14 17:32:17.386649] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:20.827 [2024-12-14 17:32:17.397317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.827 qpair failed and we were unable to recover it. 00:29:20.827 [2024-12-14 17:32:17.406976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.827 [2024-12-14 17:32:17.407018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.827 [2024-12-14 17:32:17.407034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.827 [2024-12-14 17:32:17.407044] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.827 [2024-12-14 17:32:17.407053] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:20.827 [2024-12-14 17:32:17.417590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.827 qpair failed and we were unable to recover it. 00:29:20.827 [2024-12-14 17:32:17.417723] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:20.827 A controller has encountered a failure and is being reset. 
00:29:20.827 [2024-12-14 17:32:17.427088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.827 [2024-12-14 17:32:17.427137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.827 [2024-12-14 17:32:17.427164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.827 [2024-12-14 17:32:17.427182] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.827 [2024-12-14 17:32:17.427195] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:20.827 [2024-12-14 17:32:17.437571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.827 qpair failed and we were unable to recover it. 00:29:20.827 [2024-12-14 17:32:17.447079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.827 [2024-12-14 17:32:17.447119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.827 [2024-12-14 17:32:17.447136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.827 [2024-12-14 17:32:17.447145] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.827 [2024-12-14 17:32:17.447154] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:20.827 [2024-12-14 17:32:17.457682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.827 qpair failed and we were unable to recover it. 00:29:20.827 [2024-12-14 17:32:17.457835] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:20.827 [2024-12-14 17:32:17.489809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:20.827 Controller properly reset. 00:29:21.086 Initializing NVMe Controllers 00:29:21.086 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.086 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.086 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:21.086 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:21.086 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:21.086 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:21.086 Initialization complete. Launching workers. 
00:29:21.086 Starting thread on core 1 00:29:21.086 Starting thread on core 2 00:29:21.086 Starting thread on core 3 00:29:21.086 Starting thread on core 0 00:29:21.086 17:32:17 -- host/target_disconnect.sh@59 -- # sync 00:29:21.086 00:29:21.086 real 0m12.546s 00:29:21.086 user 0m27.336s 00:29:21.086 sys 0m3.007s 00:29:21.086 17:32:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:21.086 17:32:17 -- common/autotest_common.sh@10 -- # set +x 00:29:21.086 ************************************ 00:29:21.086 END TEST nvmf_target_disconnect_tc2 00:29:21.086 ************************************ 00:29:21.086 17:32:17 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:29:21.086 17:32:17 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:29:21.086 17:32:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:21.087 17:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:21.087 17:32:17 -- common/autotest_common.sh@10 -- # set +x 00:29:21.087 ************************************ 00:29:21.087 START TEST nvmf_target_disconnect_tc3 00:29:21.087 ************************************ 00:29:21.087 17:32:17 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3 00:29:21.087 17:32:17 -- host/target_disconnect.sh@65 -- # reconnectpid=1510039 00:29:21.087 17:32:17 -- host/target_disconnect.sh@67 -- # sleep 2 00:29:21.087 17:32:17 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:29:21.087 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.993 17:32:19 -- host/target_disconnect.sh@68 -- # kill -9 1508831 00:29:22.993 17:32:19 -- host/target_disconnect.sh@70 -- # sleep 2 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed 
with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Write completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 Read completed with error (sct=0, sc=8) 00:29:24.373 starting I/O failed 00:29:24.373 [2024-12-14 17:32:20.793757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:24.942 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 1508831 Killed "${NVMF_APP[@]}" "$@" 00:29:24.943 17:32:21 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:29:24.943 17:32:21 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:24.943 17:32:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:24.943 17:32:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:24.943 17:32:21 -- common/autotest_common.sh@10 -- # set +x 00:29:25.203 17:32:21 -- nvmf/common.sh@469 -- # nvmfpid=1510834 00:29:25.203 17:32:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:25.203 17:32:21 -- nvmf/common.sh@470 -- # waitforlisten 1510834 00:29:25.203 17:32:21 -- common/autotest_common.sh@829 -- # '[' -z 1510834 ']' 00:29:25.203 17:32:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.203 17:32:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.203 17:32:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.203 17:32:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.203 17:32:21 -- common/autotest_common.sh@10 -- # set +x 00:29:25.203 [2024-12-14 17:32:21.673184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:25.203 [2024-12-14 17:32:21.673235] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.203 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.203 [2024-12-14 17:32:21.757299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.203 [2024-12-14 17:32:21.793084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:25.203 [2024-12-14 17:32:21.793203] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:25.203 [2024-12-14 17:32:21.793213] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.203 [2024-12-14 17:32:21.793222] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.203 [2024-12-14 17:32:21.793344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:25.203 [2024-12-14 17:32:21.793454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:25.203 [2024-12-14 17:32:21.793566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.203 [2024-12-14 17:32:21.793567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Read completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 Write completed with error (sct=0, sc=8) 00:29:25.203 starting I/O failed 00:29:25.203 [2024-12-14 17:32:21.798682] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.203 [2024-12-14 17:32:21.800238] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:25.203 [2024-12-14 17:32:21.800258] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:25.203 [2024-12-14 17:32:21.800266] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:26.141 17:32:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.141 17:32:22 -- common/autotest_common.sh@862 -- # return 0 00:29:26.141 17:32:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:26.141 17:32:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:26.141 17:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:26.141 17:32:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.141 17:32:22 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.141 17:32:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.141 17:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:26.141 Malloc0 00:29:26.141 17:32:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.141 17:32:22 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:26.141 17:32:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.141 17:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:26.141 [2024-12-14 17:32:22.582862] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa68ab0/0xa74580) succeed. 00:29:26.141 [2024-12-14 17:32:22.592201] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa6a050/0xab5c20) succeed. 
00:29:26.141 17:32:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.141 17:32:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.141 17:32:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.141 17:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:26.141 17:32:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.141 17:32:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.141 17:32:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.141 17:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:26.142 17:32:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.142 17:32:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:29:26.142 17:32:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.142 17:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:26.142 [2024-12-14 17:32:22.733165] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:29:26.142 17:32:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.142 17:32:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:29:26.142 17:32:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.142 17:32:22 -- common/autotest_common.sh@10 -- # set +x 00:29:26.142 17:32:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.142 17:32:22 -- host/target_disconnect.sh@73 -- # wait 1510039 00:29:26.142 [2024-12-14 17:32:22.804106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.142 qpair failed and we were unable to recover it. 00:29:26.142 [2024-12-14 17:32:22.805740] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.142 [2024-12-14 17:32:22.805759] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.142 [2024-12-14 17:32:22.805768] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:27.521 [2024-12-14 17:32:23.809675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.521 qpair failed and we were unable to recover it. 00:29:27.521 [2024-12-14 17:32:23.811150] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:27.521 [2024-12-14 17:32:23.811167] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:27.521 [2024-12-14 17:32:23.811176] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:28.459 [2024-12-14 17:32:24.815040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.459 qpair failed and we were unable to recover it. 
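(For reference, the target-side configuration exercised above by tc3 — the 64 MB Malloc0 bdev with 512-byte blocks, the RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 and its listeners on 192.168.100.9:4420 — corresponds roughly to the following standalone RPC sequence. This is a minimal sketch only: the test drives the same calls through its rpc_cmd wrapper, and the scripts/rpc.py invocation and the default /var/tmp/spdk.sock RPC socket shown here are assumptions about how one would reproduce the setup by hand, not output captured from this run.)
# assumed standalone reproduction of the tc3 target setup via scripts/rpc.py
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024    # RDMA transport for the NVMe-oF target
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420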
00:29:28.459 [2024-12-14 17:32:24.816490] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:28.459 [2024-12-14 17:32:24.816513] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:28.459 [2024-12-14 17:32:24.816521] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:29.396 [2024-12-14 17:32:25.820329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-12-14 17:32:25.821764] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:29.396 [2024-12-14 17:32:25.821781] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:29.396 [2024-12-14 17:32:25.821789] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:30.333 [2024-12-14 17:32:26.825708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.333 qpair failed and we were unable to recover it. 00:29:30.333 [2024-12-14 17:32:26.827255] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:30.333 [2024-12-14 17:32:26.827271] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:30.333 [2024-12-14 17:32:26.827279] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:31.275 [2024-12-14 17:32:27.831347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.275 qpair failed and we were unable to recover it. 00:29:31.275 [2024-12-14 17:32:27.832949] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:31.275 [2024-12-14 17:32:27.832966] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:31.275 [2024-12-14 17:32:27.832974] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:32.313 [2024-12-14 17:32:28.836743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.313 qpair failed and we were unable to recover it. 
00:29:33.250 Write completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Write completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Write completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Write completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Write completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Read completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.250 Write completed with error (sct=0, sc=8) 00:29:33.250 starting I/O failed 00:29:33.251 Read completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Read completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Read completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Read completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Read completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 Write completed with error (sct=0, sc=8) 00:29:33.251 starting I/O failed 00:29:33.251 [2024-12-14 17:32:29.841791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.251 [2024-12-14 17:32:29.843277] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:33.251 [2024-12-14 17:32:29.843295] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:33.251 [2024-12-14 17:32:29.843303] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:34.188 [2024-12-14 17:32:30.847135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such 
device or address) on qpair id 3 00:29:34.188 qpair failed and we were unable to recover it. 00:29:34.188 [2024-12-14 17:32:30.848594] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:34.188 [2024-12-14 17:32:30.848611] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:34.188 [2024-12-14 17:32:30.848618] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:35.566 [2024-12-14 17:32:31.852448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.566 qpair failed and we were unable to recover it. 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Write completed with error (sct=0, sc=8) 00:29:36.505 starting I/O failed 00:29:36.505 Read completed with error (sct=0, sc=8) 00:29:36.505 
starting I/O failed 00:29:36.505 [2024-12-14 17:32:32.857410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.505 [2024-12-14 17:32:32.857436] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:36.505 A controller has encountered a failure and is being reset. 00:29:36.505 Resorting to new failover address 192.168.100.9 00:29:36.505 [2024-12-14 17:32:32.859270] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:36.505 [2024-12-14 17:32:32.859297] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:36.505 [2024-12-14 17:32:32.859309] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:37.443 [2024-12-14 17:32:33.863171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.443 qpair failed and we were unable to recover it. 00:29:37.443 [2024-12-14 17:32:33.864743] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:37.443 [2024-12-14 17:32:33.864760] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:37.443 [2024-12-14 17:32:33.864768] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:38.380 [2024-12-14 17:32:34.868691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:38.380 qpair failed and we were unable to recover it. 00:29:38.380 [2024-12-14 17:32:34.870408] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:38.380 [2024-12-14 17:32:34.870435] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:38.380 [2024-12-14 17:32:34.870446] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:39.315 [2024-12-14 17:32:35.874239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.315 qpair failed and we were unable to recover it. 00:29:39.315 [2024-12-14 17:32:35.875837] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:39.315 [2024-12-14 17:32:35.875853] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:39.315 [2024-12-14 17:32:35.875861] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:29:40.252 [2024-12-14 17:32:36.879737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:40.252 qpair failed and we were unable to recover it. 00:29:40.252 [2024-12-14 17:32:36.879880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
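At this point the keep-alive submission fails, the controller is marked failed, and the test falls back to the failover address 192.168.100.9. Below is a hedged sketch of the initiator-side attach this corresponds to, using the bdev_nvme_attach_controller RPC; the bdev name Nvme0 and the bdevperf RPC socket path are assumptions, since the actual attach and failover handling are driven internally by the bdevperf application that target_disconnect.sh runs.

  # hedged sketch: attach the subsystem over RDMA, then retry against the failover listener
  rpc=./scripts/rpc.py                        # assumed SPDK checkout layout
  sock=/var/tmp/bdevperf.sock                 # assumed socket path for the bdevperf app
  $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # if the primary listener rejects the connection, attach against the failover address instead
  $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
      -a 192.168.100.9 -s 4420 -n nqn.2016-06.io.spdk:cnode1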
00:29:40.252 [2024-12-14 17:32:36.879985] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:40.252 [2024-12-14 17:32:36.911123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:40.252 Controller properly reset. 00:29:40.512 Initializing NVMe Controllers 00:29:40.512 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.512 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.512 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:40.512 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:40.512 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:40.512 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:40.512 Initialization complete. Launching workers. 00:29:40.512 Starting thread on core 1 00:29:40.512 Starting thread on core 2 00:29:40.512 Starting thread on core 3 00:29:40.512 Starting thread on core 0 00:29:40.512 17:32:36 -- host/target_disconnect.sh@74 -- # sync 00:29:40.512 00:29:40.512 real 0m19.354s 00:29:40.512 user 1m3.973s 00:29:40.512 sys 0m6.002s 00:29:40.512 17:32:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:40.512 17:32:36 -- common/autotest_common.sh@10 -- # set +x 00:29:40.512 ************************************ 00:29:40.512 END TEST nvmf_target_disconnect_tc3 00:29:40.512 ************************************ 00:29:40.512 17:32:37 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:40.512 17:32:37 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:40.512 17:32:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:40.512 17:32:37 -- nvmf/common.sh@116 -- # sync 00:29:40.512 17:32:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:40.512 17:32:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:40.512 17:32:37 -- nvmf/common.sh@119 -- # set +e 00:29:40.512 17:32:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:40.512 17:32:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:40.512 rmmod nvme_rdma 00:29:40.512 rmmod nvme_fabrics 00:29:40.512 17:32:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:40.512 17:32:37 -- nvmf/common.sh@123 -- # set -e 00:29:40.512 17:32:37 -- nvmf/common.sh@124 -- # return 0 00:29:40.512 17:32:37 -- nvmf/common.sh@477 -- # '[' -n 1510834 ']' 00:29:40.512 17:32:37 -- nvmf/common.sh@478 -- # killprocess 1510834 00:29:40.512 17:32:37 -- common/autotest_common.sh@936 -- # '[' -z 1510834 ']' 00:29:40.512 17:32:37 -- common/autotest_common.sh@940 -- # kill -0 1510834 00:29:40.512 17:32:37 -- common/autotest_common.sh@941 -- # uname 00:29:40.512 17:32:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:40.512 17:32:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1510834 00:29:40.512 17:32:37 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:29:40.512 17:32:37 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:29:40.512 17:32:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1510834' 00:29:40.512 killing process with pid 1510834 00:29:40.512 17:32:37 -- common/autotest_common.sh@955 -- # kill 1510834 00:29:40.512 17:32:37 -- 
common/autotest_common.sh@960 -- # wait 1510834 00:29:40.772 17:32:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:40.772 17:32:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:40.772 00:29:40.772 real 0m40.701s 00:29:40.772 user 2m36.052s 00:29:40.772 sys 0m15.034s 00:29:40.772 17:32:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:40.772 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:29:40.772 ************************************ 00:29:40.772 END TEST nvmf_target_disconnect 00:29:40.772 ************************************ 00:29:40.772 17:32:37 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:40.772 17:32:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:40.772 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.032 17:32:37 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:41.032 00:29:41.032 real 21m16.023s 00:29:41.032 user 68m9.302s 00:29:41.032 sys 4m59.570s 00:29:41.032 17:32:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:41.032 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.032 ************************************ 00:29:41.032 END TEST nvmf_rdma 00:29:41.032 ************************************ 00:29:41.032 17:32:37 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:41.032 17:32:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:41.032 17:32:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:41.032 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.032 ************************************ 00:29:41.032 START TEST spdkcli_nvmf_rdma 00:29:41.032 ************************************ 00:29:41.032 17:32:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:41.032 * Looking for test storage... 00:29:41.032 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:41.032 17:32:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:41.032 17:32:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:41.032 17:32:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:41.032 17:32:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:41.032 17:32:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:41.032 17:32:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:41.032 17:32:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:41.032 17:32:37 -- scripts/common.sh@335 -- # IFS=.-: 00:29:41.032 17:32:37 -- scripts/common.sh@335 -- # read -ra ver1 00:29:41.032 17:32:37 -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.032 17:32:37 -- scripts/common.sh@336 -- # read -ra ver2 00:29:41.032 17:32:37 -- scripts/common.sh@337 -- # local 'op=<' 00:29:41.032 17:32:37 -- scripts/common.sh@339 -- # ver1_l=2 00:29:41.032 17:32:37 -- scripts/common.sh@340 -- # ver2_l=1 00:29:41.032 17:32:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:41.032 17:32:37 -- scripts/common.sh@343 -- # case "$op" in 00:29:41.032 17:32:37 -- scripts/common.sh@344 -- # : 1 00:29:41.032 17:32:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:41.032 17:32:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.032 17:32:37 -- scripts/common.sh@364 -- # decimal 1 00:29:41.032 17:32:37 -- scripts/common.sh@352 -- # local d=1 00:29:41.032 17:32:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.032 17:32:37 -- scripts/common.sh@354 -- # echo 1 00:29:41.032 17:32:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:41.032 17:32:37 -- scripts/common.sh@365 -- # decimal 2 00:29:41.032 17:32:37 -- scripts/common.sh@352 -- # local d=2 00:29:41.032 17:32:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.032 17:32:37 -- scripts/common.sh@354 -- # echo 2 00:29:41.032 17:32:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:41.032 17:32:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:41.032 17:32:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:41.032 17:32:37 -- scripts/common.sh@367 -- # return 0 00:29:41.032 17:32:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.032 17:32:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:41.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.032 --rc genhtml_branch_coverage=1 00:29:41.032 --rc genhtml_function_coverage=1 00:29:41.032 --rc genhtml_legend=1 00:29:41.032 --rc geninfo_all_blocks=1 00:29:41.032 --rc geninfo_unexecuted_blocks=1 00:29:41.032 00:29:41.032 ' 00:29:41.032 17:32:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:41.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.032 --rc genhtml_branch_coverage=1 00:29:41.032 --rc genhtml_function_coverage=1 00:29:41.032 --rc genhtml_legend=1 00:29:41.032 --rc geninfo_all_blocks=1 00:29:41.032 --rc geninfo_unexecuted_blocks=1 00:29:41.032 00:29:41.032 ' 00:29:41.032 17:32:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:41.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.032 --rc genhtml_branch_coverage=1 00:29:41.032 --rc genhtml_function_coverage=1 00:29:41.032 --rc genhtml_legend=1 00:29:41.032 --rc geninfo_all_blocks=1 00:29:41.032 --rc geninfo_unexecuted_blocks=1 00:29:41.032 00:29:41.032 ' 00:29:41.032 17:32:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:41.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.032 --rc genhtml_branch_coverage=1 00:29:41.032 --rc genhtml_function_coverage=1 00:29:41.032 --rc genhtml_legend=1 00:29:41.032 --rc geninfo_all_blocks=1 00:29:41.032 --rc geninfo_unexecuted_blocks=1 00:29:41.032 00:29:41.032 ' 00:29:41.032 17:32:37 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:41.032 17:32:37 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:41.032 17:32:37 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:41.032 17:32:37 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.032 17:32:37 -- nvmf/common.sh@7 -- # uname -s 00:29:41.032 17:32:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.032 17:32:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.032 17:32:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.032 17:32:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.032 17:32:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.032 17:32:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:29:41.032 17:32:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.032 17:32:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.032 17:32:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.032 17:32:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.032 17:32:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:41.032 17:32:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:41.032 17:32:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.032 17:32:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.032 17:32:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.032 17:32:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:41.032 17:32:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.032 17:32:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.032 17:32:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.032 17:32:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.032 17:32:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.032 17:32:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.032 17:32:37 -- paths/export.sh@5 -- # export PATH 00:29:41.032 17:32:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.032 17:32:37 -- nvmf/common.sh@46 -- # : 0 00:29:41.032 17:32:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:41.032 17:32:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:41.292 17:32:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:41.292 17:32:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.292 17:32:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.292 17:32:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:41.292 17:32:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 
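nvmf/common.sh generates a host NQN with nvme gen-hostnqn, derives NVME_HOSTID from its uuid suffix, and stores the matching nvme-cli arguments in the NVME_HOST array. A hedged example of how those pieces end up on a connect command line follows; the target address, port and subsystem NQN are the ones used elsewhere in this run, and deriving the host ID with parameter expansion is an assumption about the helper, not a quote of it.

  # hedged example: host identity arguments as nvmf/common.sh prepares them
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumed: the uuid portion of the host NQN
  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"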
00:29:41.292 17:32:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:41.292 17:32:37 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:41.292 17:32:37 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:41.292 17:32:37 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:41.292 17:32:37 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:41.292 17:32:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:41.292 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.292 17:32:37 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:41.292 17:32:37 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1513619 00:29:41.292 17:32:37 -- spdkcli/common.sh@34 -- # waitforlisten 1513619 00:29:41.292 17:32:37 -- common/autotest_common.sh@829 -- # '[' -z 1513619 ']' 00:29:41.292 17:32:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.292 17:32:37 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:41.292 17:32:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:41.292 17:32:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.292 17:32:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:41.292 17:32:37 -- common/autotest_common.sh@10 -- # set +x 00:29:41.292 [2024-12-14 17:32:37.768683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:41.292 [2024-12-14 17:32:37.768737] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513619 ] 00:29:41.292 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.292 [2024-12-14 17:32:37.838958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:41.292 [2024-12-14 17:32:37.876274] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:41.292 [2024-12-14 17:32:37.876428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.292 [2024-12-14 17:32:37.876429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.228 17:32:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.228 17:32:38 -- common/autotest_common.sh@862 -- # return 0 00:29:42.228 17:32:38 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:42.228 17:32:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:42.228 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:29:42.228 17:32:38 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:42.228 17:32:38 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:42.228 17:32:38 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:42.228 17:32:38 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:42.228 17:32:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.228 17:32:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:42.228 17:32:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:42.228 17:32:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:42.228 17:32:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.228 17:32:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:42.228 17:32:38 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:42.228 17:32:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:42.228 17:32:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:42.228 17:32:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:42.228 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:29:48.796 17:32:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:48.796 17:32:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:48.796 17:32:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:48.796 17:32:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:48.796 17:32:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:48.796 17:32:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:48.796 17:32:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:48.796 17:32:45 -- nvmf/common.sh@294 -- # net_devs=() 00:29:48.796 17:32:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:48.796 17:32:45 -- nvmf/common.sh@295 -- # e810=() 00:29:48.796 17:32:45 -- nvmf/common.sh@295 -- # local -ga e810 00:29:48.796 17:32:45 -- nvmf/common.sh@296 -- # x722=() 00:29:48.796 17:32:45 -- nvmf/common.sh@296 -- # local -ga x722 00:29:48.796 17:32:45 -- nvmf/common.sh@297 -- # mlx=() 00:29:48.796 17:32:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:48.796 17:32:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.797 17:32:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:48.797 17:32:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:48.797 17:32:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:48.797 17:32:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:48.797 17:32:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:48.797 17:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:48.797 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:48.797 17:32:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:48.797 17:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:48.797 17:32:45 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:48.797 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:48.797 17:32:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:48.797 17:32:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:48.797 17:32:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.797 17:32:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:48.797 17:32:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.797 17:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:48.797 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:48.797 17:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.797 17:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.797 17:32:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:48.797 17:32:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.797 17:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:48.797 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:48.797 17:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.797 17:32:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:48.797 17:32:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:48.797 17:32:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:48.797 17:32:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:48.797 17:32:45 -- nvmf/common.sh@57 -- # uname 00:29:48.797 17:32:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:48.797 17:32:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:48.797 17:32:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:48.797 17:32:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:48.797 17:32:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:48.797 17:32:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:48.797 17:32:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:48.797 17:32:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:48.797 17:32:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:48.797 17:32:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:48.797 17:32:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:48.797 17:32:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:48.797 17:32:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:48.797 17:32:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:48.797 17:32:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:48.797 17:32:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:48.797 17:32:45 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:48.797 17:32:45 -- nvmf/common.sh@104 -- # continue 2 00:29:48.797 17:32:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.797 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:48.797 17:32:45 -- nvmf/common.sh@104 -- # continue 2 00:29:48.797 17:32:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:48.797 17:32:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:48.797 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:48.797 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:48.797 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:48.797 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:48.797 17:32:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:48.797 17:32:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:48.797 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:48.797 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:48.797 altname enp217s0f0np0 00:29:48.797 altname ens818f0np0 00:29:48.797 inet 192.168.100.8/24 scope global mlx_0_0 00:29:48.797 valid_lft forever preferred_lft forever 00:29:48.797 17:32:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:48.797 17:32:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:48.797 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:48.797 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:48.797 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:48.797 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:48.797 17:32:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:48.797 17:32:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:48.797 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:48.797 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:48.797 altname enp217s0f1np1 00:29:48.797 altname ens818f1np1 00:29:48.797 inet 192.168.100.9/24 scope global mlx_0_1 00:29:48.797 valid_lft forever preferred_lft forever 00:29:48.797 17:32:45 -- nvmf/common.sh@410 -- # return 0 00:29:48.797 17:32:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:48.797 17:32:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:48.797 17:32:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:48.797 17:32:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:49.055 17:32:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:49.055 17:32:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:49.055 17:32:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:49.055 17:32:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:49.055 17:32:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:49.055 17:32:45 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:49.055 17:32:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:49.055 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.055 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:49.055 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:49.055 17:32:45 -- nvmf/common.sh@104 -- # continue 2 00:29:49.055 17:32:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:49.055 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.055 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:49.055 17:32:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.055 17:32:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:49.055 17:32:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:49.055 17:32:45 -- nvmf/common.sh@104 -- # continue 2 00:29:49.055 17:32:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:49.055 17:32:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:49.055 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:49.055 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:49.055 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:49.055 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:49.055 17:32:45 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:49.055 17:32:45 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:49.055 17:32:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:49.055 17:32:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:49.056 17:32:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:49.056 17:32:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:49.056 17:32:45 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:49.056 192.168.100.9' 00:29:49.056 17:32:45 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:49.056 192.168.100.9' 00:29:49.056 17:32:45 -- nvmf/common.sh@445 -- # head -n 1 00:29:49.056 17:32:45 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:49.056 17:32:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:49.056 192.168.100.9' 00:29:49.056 17:32:45 -- nvmf/common.sh@446 -- # tail -n +2 00:29:49.056 17:32:45 -- nvmf/common.sh@446 -- # head -n 1 00:29:49.056 17:32:45 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:49.056 17:32:45 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:49.056 17:32:45 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:49.056 17:32:45 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:49.056 17:32:45 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:49.056 17:32:45 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:49.056 17:32:45 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:49.056 17:32:45 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:49.056 17:32:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.056 17:32:45 -- common/autotest_common.sh@10 -- # set +x 00:29:49.056 17:32:45 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:49.056 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:49.056 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:49.056 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:29:49.056 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:49.056 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:49.056 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:49.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:49.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:49.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:49.056 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:49.056 ' 00:29:49.315 [2024-12-14 17:32:45.941729] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:51.853 [2024-12-14 17:32:48.010099] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2197930/0x219a180) succeed. 
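The spdkcli_job.py call above feeds the quoted command list to the CLI in one batch; the "Executing command" lines that follow are its echo of each step. As a hedged sketch, a few of the same commands can be issued one at a time with spdkcli.py; the workspace path and command strings are taken from this log, and running them individually is only an illustration of what each batched step does.

  # hedged sketch: a handful of the batched commands, run individually through spdkcli
  cli=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
  $cli /bdevs/malloc create 32 512 Malloc3
  $cli nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $cli /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
  $cli ll /nvmf                               # inspect the resulting tree, as check_match does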
00:29:51.853 [2024-12-14 17:32:48.020334] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2198fc0/0x21db820) succeed. 00:29:52.790 [2024-12-14 17:32:49.262077] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:55.327 [2024-12-14 17:32:51.453152] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:56.705 [2024-12-14 17:32:53.335434] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:58.612 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:58.612 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:58.612 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:58.612 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:58.612 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:58.612 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:58.612 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:58.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:58.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:58.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 
192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:58.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:58.612 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:58.612 17:32:54 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:58.612 17:32:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.612 17:32:54 -- common/autotest_common.sh@10 -- # set +x 00:29:58.612 17:32:54 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:58.612 17:32:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:58.612 17:32:54 -- common/autotest_common.sh@10 -- # set +x 00:29:58.612 17:32:54 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:58.612 17:32:54 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:58.871 17:32:55 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:58.871 17:32:55 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:58.871 17:32:55 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:58.871 17:32:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.871 17:32:55 -- common/autotest_common.sh@10 -- # set +x 00:29:58.871 17:32:55 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:58.871 17:32:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:58.871 17:32:55 -- common/autotest_common.sh@10 -- # set +x 00:29:58.871 17:32:55 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:58.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:58.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:58.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:58.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:58.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:58.871 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:58.871 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:58.871 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:58.871 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:58.871 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:58.871 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:58.871 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:58.871 '\''/bdevs/malloc 
delete Malloc1'\'' '\''Malloc1'\'' 00:29:58.871 ' 00:30:04.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:04.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:04.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:04.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:04.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:30:04.145 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:30:04.145 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:04.145 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:04.145 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:04.145 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:04.145 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:04.145 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:04.145 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:04.145 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:04.145 17:33:00 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:04.145 17:33:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:04.145 17:33:00 -- common/autotest_common.sh@10 -- # set +x 00:30:04.145 17:33:00 -- spdkcli/nvmf.sh@90 -- # killprocess 1513619 00:30:04.145 17:33:00 -- common/autotest_common.sh@936 -- # '[' -z 1513619 ']' 00:30:04.145 17:33:00 -- common/autotest_common.sh@940 -- # kill -0 1513619 00:30:04.145 17:33:00 -- common/autotest_common.sh@941 -- # uname 00:30:04.145 17:33:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:04.145 17:33:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1513619 00:30:04.145 17:33:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:04.145 17:33:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:04.145 17:33:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1513619' 00:30:04.145 killing process with pid 1513619 00:30:04.145 17:33:00 -- common/autotest_common.sh@955 -- # kill 1513619 00:30:04.145 [2024-12-14 17:33:00.501526] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:04.145 17:33:00 -- common/autotest_common.sh@960 -- # wait 1513619 00:30:04.145 17:33:00 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:30:04.145 17:33:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:04.145 17:33:00 -- nvmf/common.sh@116 -- # sync 00:30:04.145 17:33:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:30:04.145 17:33:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:30:04.145 17:33:00 -- nvmf/common.sh@119 -- # set +e 00:30:04.145 17:33:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:04.145 17:33:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:30:04.145 rmmod nvme_rdma 00:30:04.145 rmmod nvme_fabrics 
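With the spdkcli delete pass done and the nvmf_tgt process killed, nvmftestfini unwinds the kernel side. The teardown traced here, condensed into a hedged sketch (the retry loop is simplified; nvmfcleanup attempts the removal up to 20 times):

  # hedged sketch of the nvmfcleanup teardown steps traced in this run
  sync
  modprobe -v -r nvme-rdma                    # the real helper retries this in a {1..20} loop
  modprobe -v -r nvme-fabrics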
00:30:04.145 17:33:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:04.145 17:33:00 -- nvmf/common.sh@123 -- # set -e 00:30:04.145 17:33:00 -- nvmf/common.sh@124 -- # return 0 00:30:04.145 17:33:00 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:30:04.145 17:33:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:04.145 17:33:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:30:04.145 00:30:04.145 real 0m23.262s 00:30:04.145 user 0m49.486s 00:30:04.145 sys 0m6.071s 00:30:04.145 17:33:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:04.145 17:33:00 -- common/autotest_common.sh@10 -- # set +x 00:30:04.145 ************************************ 00:30:04.145 END TEST spdkcli_nvmf_rdma 00:30:04.145 ************************************ 00:30:04.145 17:33:00 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:30:04.145 17:33:00 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:30:04.145 17:33:00 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:04.404 17:33:00 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:30:04.404 17:33:00 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:30:04.404 17:33:00 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:30:04.404 17:33:00 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:30:04.404 17:33:00 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:04.404 17:33:00 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:30:04.404 17:33:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:04.404 17:33:00 -- common/autotest_common.sh@10 -- # set +x 00:30:04.404 17:33:00 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:30:04.404 17:33:00 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:30:04.404 17:33:00 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:30:04.404 17:33:00 -- common/autotest_common.sh@10 -- # set +x 00:30:10.987 INFO: APP EXITING 00:30:10.988 INFO: killing all VMs 00:30:10.988 INFO: killing vhost app 00:30:10.988 INFO: EXIT DONE 00:30:13.525 Waiting for block devices as requested 00:30:13.525 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:13.525 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:13.525 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:13.525 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:13.525 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:13.525 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:13.784 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:13.784 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:13.784 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:14.043 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:14.043 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:14.043 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:14.043 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:14.303 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:14.303 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:14.303 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:14.562 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:18.761 
Cleaning 00:30:18.761 Removing: /var/run/dpdk/spdk0/config 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:18.761 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:18.761 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:18.761 Removing: /var/run/dpdk/spdk1/config 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:18.761 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:18.761 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:18.761 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:18.761 Removing: /var/run/dpdk/spdk2/config 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:18.761 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:18.761 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:18.761 Removing: /var/run/dpdk/spdk3/config 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:18.761 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:18.761 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:18.761 Removing: /var/run/dpdk/spdk4/config 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:18.761 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:30:18.761 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:18.761 Removing: /dev/shm/bdevperf_trace.pid1343282 00:30:18.761 Removing: /dev/shm/bdevperf_trace.pid1437704 00:30:18.761 Removing: /dev/shm/bdev_svc_trace.1 00:30:18.761 Removing: /dev/shm/nvmf_trace.0 00:30:18.761 Removing: /dev/shm/spdk_tgt_trace.pid1179267 00:30:18.761 Removing: /var/run/dpdk/spdk0 00:30:18.761 Removing: /var/run/dpdk/spdk1 00:30:18.761 Removing: /var/run/dpdk/spdk2 00:30:18.761 Removing: /var/run/dpdk/spdk3 00:30:18.761 Removing: /var/run/dpdk/spdk4 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1176402 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1177687 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1179267 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1180148 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1185364 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1186923 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1187267 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1187660 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1188092 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1188428 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1188575 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1188784 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1189104 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1190037 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1193170 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1193560 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1193989 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1194041 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1194609 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1194757 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1195200 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1195461 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1195760 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1195799 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1196072 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1196319 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1196725 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1197006 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1197337 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1197641 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1197664 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1197726 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1197992 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1198279 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1198545 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1198836 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1198985 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1199159 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1199407 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1199694 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1199962 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1200245 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1200478 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1200673 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1200833 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1201111 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1201383 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1201667 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1201936 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1202190 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1202336 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1202533 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1202795 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1203082 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1203350 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1203636 00:30:18.761 Removing: 
/var/run/dpdk/spdk_pid1203841 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1204030 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1204210 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1204493 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1204765 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1205050 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1205319 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1205560 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1205727 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1205932 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1206188 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1206481 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1206751 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1207032 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1207273 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1207474 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1207668 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1207962 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1211901 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1309263 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1313521 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1324104 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1329580 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1333103 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1333928 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1343282 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1343629 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1348231 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1354154 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1356914 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1366930 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1391755 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1395471 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1401121 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1435146 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1436231 00:30:18.761 Removing: /var/run/dpdk/spdk_pid1437704 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1442052 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1449261 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1450084 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1451071 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1451971 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1452393 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1456790 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1456797 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1461374 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1461908 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1462468 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1463252 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1463269 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1465709 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1467591 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1469474 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1471371 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1473261 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1475164 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1481982 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1482535 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1484851 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1486064 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1493069 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1495867 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1501410 00:30:18.762 Removing: /var/run/dpdk/spdk_pid1501651 00:30:19.052 Removing: /var/run/dpdk/spdk_pid1507776 00:30:19.052 Removing: /var/run/dpdk/spdk_pid1508109 00:30:19.052 Removing: /var/run/dpdk/spdk_pid1510039 00:30:19.052 Removing: 
/var/run/dpdk/spdk_pid1513619 00:30:19.052 Clean 00:30:19.052 killing process with pid 1126750 00:30:37.204 killing process with pid 1126747 00:30:37.204 killing process with pid 1126749 00:30:37.204 killing process with pid 1126748 00:30:37.204 17:33:31 -- common/autotest_common.sh@1446 -- # return 0 00:30:37.204 17:33:31 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:37.204 17:33:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:37.204 17:33:31 -- common/autotest_common.sh@10 -- # set +x 00:30:37.204 17:33:31 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:37.204 17:33:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:37.204 17:33:31 -- common/autotest_common.sh@10 -- # set +x 00:30:37.204 17:33:31 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:37.204 17:33:31 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:37.204 17:33:31 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:37.204 17:33:31 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:37.204 17:33:31 -- spdk/autotest.sh@383 -- # hostname 00:30:37.204 17:33:31 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:37.204 geninfo: WARNING: invalid characters removed from testname! 00:30:55.310 17:33:50 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:55.881 17:33:52 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:57.793 17:33:53 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:59.175 17:33:55 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:00.563 17:33:57 -- spdk/autotest.sh@391 -- # 
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:02.473 17:33:58 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:03.855 17:34:00 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:03.855 17:34:00 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:31:03.855 17:34:00 -- common/autotest_common.sh@1690 -- $ lcov --version 00:31:03.855 17:34:00 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:31:03.855 17:34:00 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:31:03.855 17:34:00 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:31:03.855 17:34:00 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:31:03.855 17:34:00 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:31:03.855 17:34:00 -- scripts/common.sh@335 -- $ IFS=.-: 00:31:03.855 17:34:00 -- scripts/common.sh@335 -- $ read -ra ver1 00:31:03.855 17:34:00 -- scripts/common.sh@336 -- $ IFS=.-: 00:31:03.855 17:34:00 -- scripts/common.sh@336 -- $ read -ra ver2 00:31:03.855 17:34:00 -- scripts/common.sh@337 -- $ local 'op=<' 00:31:03.855 17:34:00 -- scripts/common.sh@339 -- $ ver1_l=2 00:31:03.855 17:34:00 -- scripts/common.sh@340 -- $ ver2_l=1 00:31:03.855 17:34:00 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:31:03.855 17:34:00 -- scripts/common.sh@343 -- $ case "$op" in 00:31:03.855 17:34:00 -- scripts/common.sh@344 -- $ : 1 00:31:03.855 17:34:00 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:31:03.855 17:34:00 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.855 17:34:00 -- scripts/common.sh@364 -- $ decimal 1 00:31:03.855 17:34:00 -- scripts/common.sh@352 -- $ local d=1 00:31:03.855 17:34:00 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:31:03.855 17:34:00 -- scripts/common.sh@354 -- $ echo 1 00:31:03.855 17:34:00 -- scripts/common.sh@364 -- $ ver1[v]=1 00:31:03.855 17:34:00 -- scripts/common.sh@365 -- $ decimal 2 00:31:03.855 17:34:00 -- scripts/common.sh@352 -- $ local d=2 00:31:03.855 17:34:00 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:31:03.856 17:34:00 -- scripts/common.sh@354 -- $ echo 2 00:31:03.856 17:34:00 -- scripts/common.sh@365 -- $ ver2[v]=2 00:31:03.856 17:34:00 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:31:03.856 17:34:00 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:31:03.856 17:34:00 -- scripts/common.sh@367 -- $ return 0 00:31:03.856 17:34:00 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.856 17:34:00 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:31:03.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.856 --rc genhtml_branch_coverage=1 00:31:03.856 --rc genhtml_function_coverage=1 00:31:03.856 --rc genhtml_legend=1 00:31:03.856 --rc geninfo_all_blocks=1 00:31:03.856 --rc geninfo_unexecuted_blocks=1 00:31:03.856 00:31:03.856 ' 00:31:03.856 17:34:00 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:31:03.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.856 --rc genhtml_branch_coverage=1 00:31:03.856 --rc genhtml_function_coverage=1 00:31:03.856 --rc genhtml_legend=1 00:31:03.856 --rc geninfo_all_blocks=1 00:31:03.856 --rc geninfo_unexecuted_blocks=1 00:31:03.856 00:31:03.856 ' 00:31:03.856 17:34:00 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:31:03.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.856 --rc genhtml_branch_coverage=1 00:31:03.856 --rc genhtml_function_coverage=1 00:31:03.856 --rc genhtml_legend=1 00:31:03.856 --rc geninfo_all_blocks=1 00:31:03.856 --rc geninfo_unexecuted_blocks=1 00:31:03.856 00:31:03.856 ' 00:31:03.856 17:34:00 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:31:03.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.856 --rc genhtml_branch_coverage=1 00:31:03.856 --rc genhtml_function_coverage=1 00:31:03.856 --rc genhtml_legend=1 00:31:03.856 --rc geninfo_all_blocks=1 00:31:03.856 --rc geninfo_unexecuted_blocks=1 00:31:03.856 00:31:03.856 ' 00:31:03.856 17:34:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:03.856 17:34:00 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:03.856 17:34:00 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.856 17:34:00 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.856 17:34:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.856 17:34:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.856 17:34:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.856 17:34:00 -- paths/export.sh@5 -- $ export PATH 00:31:03.856 17:34:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.856 17:34:00 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:31:03.856 17:34:00 -- common/autobuild_common.sh@440 -- $ date +%s 00:31:03.856 17:34:00 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734194040.XXXXXX 00:31:03.856 17:34:00 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734194040.sAuxD1 00:31:03.856 17:34:00 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:31:03.856 17:34:00 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:31:03.856 17:34:00 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:31:03.856 17:34:00 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:31:03.856 17:34:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:03.856 17:34:00 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:03.856 17:34:00 -- common/autobuild_common.sh@456 -- $ get_config_params 00:31:03.856 17:34:00 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:03.856 17:34:00 -- common/autotest_common.sh@10 -- $ set +x 00:31:03.856 17:34:00 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:31:03.856 17:34:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:31:03.856 17:34:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:03.856 17:34:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:03.856 17:34:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:03.856 17:34:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:03.856 17:34:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:03.856 
17:34:00 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:03.856 17:34:00 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:03.856 17:34:00 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:31:03.856 17:34:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:03.856 + [[ -n 1072459 ]] 00:31:03.856 + sudo kill 1072459 00:31:04.128 [Pipeline] } 00:31:04.143 [Pipeline] // stage 00:31:04.148 [Pipeline] } 00:31:04.162 [Pipeline] // timeout 00:31:04.167 [Pipeline] } 00:31:04.181 [Pipeline] // catchError 00:31:04.187 [Pipeline] } 00:31:04.202 [Pipeline] // wrap 00:31:04.208 [Pipeline] } 00:31:04.221 [Pipeline] // catchError 00:31:04.230 [Pipeline] stage 00:31:04.233 [Pipeline] { (Epilogue) 00:31:04.246 [Pipeline] catchError 00:31:04.248 [Pipeline] { 00:31:04.260 [Pipeline] echo 00:31:04.262 Cleanup processes 00:31:04.268 [Pipeline] sh 00:31:04.562 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:04.562 1535368 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:04.577 [Pipeline] sh 00:31:04.867 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:04.868 ++ grep -v 'sudo pgrep' 00:31:04.868 ++ awk '{print $1}' 00:31:04.868 + sudo kill -9 00:31:04.868 + true 00:31:04.879 [Pipeline] sh 00:31:05.168 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:05.168 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:31:11.749 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:31:14.303 [Pipeline] sh 00:31:14.592 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:14.592 Artifacts sizes are good 00:31:14.607 [Pipeline] archiveArtifacts 00:31:14.615 Archiving artifacts 00:31:14.779 [Pipeline] sh 00:31:15.125 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:31:15.141 [Pipeline] cleanWs 00:31:15.151 [WS-CLEANUP] Deleting project workspace... 00:31:15.151 [WS-CLEANUP] Deferred wipeout is used... 00:31:15.158 [WS-CLEANUP] done 00:31:15.160 [Pipeline] } 00:31:15.178 [Pipeline] // catchError 00:31:15.190 [Pipeline] sh 00:31:15.472 + logger -p user.info -t JENKINS-CI 00:31:15.480 [Pipeline] } 00:31:15.493 [Pipeline] // stage 00:31:15.498 [Pipeline] } 00:31:15.513 [Pipeline] // node 00:31:15.518 [Pipeline] End of Pipeline 00:31:15.571 Finished: SUCCESS
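For reference, the coverage and build-timing post-processing traced near the end of the run (autotest.sh's lcov passes and autopackage.sh's timing_finish) condenses to roughly the following. This is a trimmed recap of the traced commands: `$out` is shorthand for the workspace output directory, only a subset of the traced `--rc` flags and remove patterns is shown, and the final SVG redirect is an assumption, not in the log.

```bash
# Condensed recap of the coverage/timing steps traced above (not the full set).
out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

# Merge the base and test captures, then strip third-party and system code.
lcov $LCOV_OPTS -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
lcov $LCOV_OPTS -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info
lcov $LCOV_OPTS -q -r $out/cov_total.info --ignore-errors unused,unused '/usr/*' -o $out/cov_total.info

# Render the per-step build timing as a flame graph (output redirect assumed).
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
    --nametype Step: --countname seconds $out/timing.txt > $out/timing.svg
```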